00:00:00.000 Started by upstream project "autotest-per-patch" build number 126199
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "jbp-per-patch" build number 23960
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.053 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.054 The recommended git tool is: git
00:00:00.054 using credential 00000000-0000-0000-0000-000000000002
00:00:00.061 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.082 Fetching changes from the remote Git repository
00:00:00.084 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.108 Using shallow fetch with depth 1
00:00:00.108 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.108 > git --version # timeout=10
00:00:00.123 > git --version # 'git version 2.39.2'
00:00:00.123 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.136 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.136 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/56/22956/10 # timeout=5
00:00:04.792 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.803 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.815 Checking out Revision d49304e16352441ae7eebb2419125dd094201f3e (FETCH_HEAD)
00:00:04.815 > git config core.sparsecheckout # timeout=10
00:00:04.825 > git read-tree -mu HEAD # timeout=10
00:00:04.841 > git checkout -f d49304e16352441ae7eebb2419125dd094201f3e # timeout=5
00:00:04.871 Commit message: "jenkins/jjb-config: Add ubuntu2404 to per-patch and nightly testing"
00:00:04.871 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:04.973 [Pipeline] Start of Pipeline
00:00:04.985 [Pipeline] library
00:00:04.986 Loading library shm_lib@master
00:00:04.986 Library shm_lib@master is cached. Copying from home.
00:00:05.001 [Pipeline] node
00:00:05.011 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest
00:00:05.013 [Pipeline] {
00:00:05.024 [Pipeline] catchError
00:00:05.025 [Pipeline] {
00:00:05.035 [Pipeline] wrap
00:00:05.044 [Pipeline] {
00:00:05.050 [Pipeline] stage
00:00:05.051 [Pipeline] { (Prologue)
00:00:05.068 [Pipeline] echo
00:00:05.069 Node: VM-host-SM9
00:00:05.074 [Pipeline] cleanWs
00:00:05.083 [WS-CLEANUP] Deleting project workspace...
00:00:05.083 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.088 [WS-CLEANUP] done
00:00:05.303 [Pipeline] setCustomBuildProperty
00:00:05.384 [Pipeline] httpRequest
00:00:05.406 [Pipeline] echo
00:00:05.408 Sorcerer 10.211.164.101 is alive
00:00:05.415 [Pipeline] httpRequest
00:00:05.419 HttpMethod: GET
00:00:05.419 URL: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz
00:00:05.420 Sending request to url: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz
00:00:05.425 Response Code: HTTP/1.1 200 OK
00:00:05.426 Success: Status code 200 is in the accepted range: 200,404
00:00:05.426 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz
00:00:09.508 [Pipeline] sh
00:00:09.788 + tar --no-same-owner -xf jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz
00:00:09.804 [Pipeline] httpRequest
00:00:09.835 [Pipeline] echo
00:00:09.836 Sorcerer 10.211.164.101 is alive
00:00:09.845 [Pipeline] httpRequest
00:00:09.849 HttpMethod: GET
00:00:09.850 URL: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz
00:00:09.850 Sending request to url: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz
00:00:09.869 Response Code: HTTP/1.1 200 OK
00:00:09.869 Success: Status code 200 is in the accepted range: 200,404
00:00:09.870 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz
00:02:35.054 [Pipeline] sh
00:02:35.332 + tar --no-same-owner -xf spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz
00:02:38.622 [Pipeline] sh
00:02:38.904 + git -C spdk log --oneline -n5
00:02:38.905 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent.
00:02:38.905 248c547d0 nvmf/tcp: add option for selecting a sock impl
00:02:38.905 2d30d9f83 accel: introduce tasks in sequence limit
00:02:38.905 2728651ee accel: adjust task per ch define name
00:02:38.905 e7cce062d Examples/Perf: correct the calculation of total bandwidth
00:02:38.918 [Pipeline] writeFile
00:02:38.931 [Pipeline] sh
00:02:39.205 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:39.217 [Pipeline] sh
00:02:39.497 + cat autorun-spdk.conf
00:02:39.497 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:39.497 SPDK_TEST_NVME=1
00:02:39.497 SPDK_TEST_FTL=1
00:02:39.497 SPDK_TEST_ISAL=1
00:02:39.497 SPDK_RUN_ASAN=1
00:02:39.497 SPDK_RUN_UBSAN=1
00:02:39.497 SPDK_TEST_XNVME=1
00:02:39.497 SPDK_TEST_NVME_FDP=1
00:02:39.497 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:39.502 RUN_NIGHTLY=0
00:02:39.504 [Pipeline] }
00:02:39.520 [Pipeline] // stage
00:02:39.537 [Pipeline] stage
00:02:39.539 [Pipeline] { (Run VM)
00:02:39.552 [Pipeline] sh
00:02:39.831 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:39.831 + echo 'Start stage prepare_nvme.sh'
00:02:39.831 Start stage prepare_nvme.sh
00:02:39.831 + [[ -n 4 ]]
00:02:39.831 + disk_prefix=ex4
00:02:39.831 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:39.831 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:39.831 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:39.831 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:39.831 ++ SPDK_TEST_NVME=1
00:02:39.831 ++ SPDK_TEST_FTL=1
00:02:39.831 ++ SPDK_TEST_ISAL=1
00:02:39.831 ++ SPDK_RUN_ASAN=1
00:02:39.831 ++ SPDK_RUN_UBSAN=1
00:02:39.831 ++ SPDK_TEST_XNVME=1
00:02:39.831 ++ SPDK_TEST_NVME_FDP=1
00:02:39.831 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:39.831 ++ RUN_NIGHTLY=0
00:02:39.831 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:39.832 + nvme_files=()
00:02:39.832 + declare -A nvme_files
00:02:39.832 + backend_dir=/var/lib/libvirt/images/backends
00:02:39.832 + nvme_files['nvme.img']=5G
00:02:39.832 + nvme_files['nvme-cmb.img']=5G
00:02:39.832 + nvme_files['nvme-multi0.img']=4G
00:02:39.832 + nvme_files['nvme-multi1.img']=4G
00:02:39.832 + nvme_files['nvme-multi2.img']=4G
00:02:39.832 + nvme_files['nvme-openstack.img']=8G
00:02:39.832 + nvme_files['nvme-zns.img']=5G
00:02:39.832 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:39.832 + (( SPDK_TEST_FTL == 1 ))
00:02:39.832 + nvme_files["nvme-ftl.img"]=6G
00:02:39.832 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:39.832 + nvme_files["nvme-fdp.img"]=1G
00:02:39.832 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:39.832 + for nvme in "${!nvme_files[@]}"
00:02:39.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:02:39.832 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:39.832 + for nvme in "${!nvme_files[@]}"
00:02:39.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:02:39.832 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:39.832 + for nvme in "${!nvme_files[@]}"
00:02:39.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:02:39.832 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:39.832 + for nvme in "${!nvme_files[@]}"
00:02:39.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:02:39.832 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:39.832 + for nvme in "${!nvme_files[@]}"
00:02:39.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:02:40.090 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:40.090 + for nvme in "${!nvme_files[@]}"
00:02:40.090 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:02:40.090 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:40.090 + for nvme in "${!nvme_files[@]}"
00:02:40.090 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:02:40.090 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:40.090 + for nvme in "${!nvme_files[@]}"
00:02:40.090 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:02:40.090 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:40.090 + for nvme in "${!nvme_files[@]}"
00:02:40.090 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:02:40.348 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:40.348 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:02:40.348 + echo 'End stage prepare_nvme.sh'
00:02:40.348 End stage prepare_nvme.sh
00:02:40.360 [Pipeline] sh
00:02:40.640 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:40.640 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:02:40.640
00:02:40.640 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:40.640 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:40.640 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:40.640 HELP=0
00:02:40.640 DRY_RUN=0
00:02:40.640 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:02:40.640 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:40.640 NVME_AUTO_CREATE=0
00:02:40.640 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:02:40.640 NVME_CMB=,,,,
00:02:40.640 NVME_PMR=,,,,
00:02:40.640 NVME_ZNS=,,,,
00:02:40.640 NVME_MS=true,,,,
00:02:40.640 NVME_FDP=,,,on,
00:02:40.640 SPDK_VAGRANT_DISTRO=fedora38
00:02:40.640 SPDK_VAGRANT_VMCPU=10
00:02:40.640 SPDK_VAGRANT_VMRAM=12288
00:02:40.640 SPDK_VAGRANT_PROVIDER=libvirt
00:02:40.640 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:40.640 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:40.640 SPDK_OPENSTACK_NETWORK=0
00:02:40.640 VAGRANT_PACKAGE_BOX=0
00:02:40.640 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:40.640 FORCE_DISTRO=true
00:02:40.640 VAGRANT_BOX_VERSION=
00:02:40.640 EXTRA_VAGRANTFILES=
00:02:40.640 NIC_MODEL=e1000
00:02:40.640
00:02:40.640 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt'
00:02:40.640 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:44.825 Bringing machine 'default' up with 'libvirt' provider...
00:02:45.084 ==> default: Creating image (snapshot of base box volume).
00:02:45.084 ==> default: Creating domain with the following settings...
00:02:45.084 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721051049_803d1cf19cd158e99a66
00:02:45.084 ==> default: -- Domain type: kvm
00:02:45.084 ==> default: -- Cpus: 10
00:02:45.084 ==> default: -- Feature: acpi
00:02:45.084 ==> default: -- Feature: apic
00:02:45.084 ==> default: -- Feature: pae
00:02:45.084 ==> default: -- Memory: 12288M
00:02:45.084 ==> default: -- Memory Backing: hugepages:
00:02:45.084 ==> default: -- Management MAC:
00:02:45.084 ==> default: -- Loader:
00:02:45.084 ==> default: -- Nvram:
00:02:45.084 ==> default: -- Base box: spdk/fedora38
00:02:45.084 ==> default: -- Storage pool: default
00:02:45.084 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721051049_803d1cf19cd158e99a66.img (20G)
00:02:45.084 ==> default: -- Volume Cache: default
00:02:45.084 ==> default: -- Kernel:
00:02:45.084 ==> default: -- Initrd:
00:02:45.084 ==> default: -- Graphics Type: vnc
00:02:45.084 ==> default: -- Graphics Port: -1
00:02:45.084 ==> default: -- Graphics IP: 127.0.0.1
00:02:45.084 ==> default: -- Graphics Password: Not defined
00:02:45.084 ==> default: -- Video Type: cirrus
00:02:45.084 ==> default: -- Video VRAM: 9216
00:02:45.084 ==> default: -- Sound Type:
00:02:45.084 ==> default: -- Keymap: en-us
00:02:45.084 ==> default: -- TPM Path:
00:02:45.084 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:45.084 ==> default: -- Command line args:
00:02:45.084 ==> default: -> value=-device,
00:02:45.084 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:45.084 ==> default: -> value=-drive,
00:02:45.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:45.084 ==> default: -> value=-device,
00:02:45.084 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:45.084 ==> default: -> value=-device,
00:02:45.084 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:45.084 ==> default: -> value=-drive,
00:02:45.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:02:45.084 ==> default: -> value=-device,
00:02:45.084 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:45.084 ==> default: -> value=-device,
00:02:45.084 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:45.084 ==> default: -> value=-drive,
00:02:45.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:45.084 ==> default: -> value=-device,
00:02:45.084 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:45.084 ==> default: -> value=-drive,
00:02:45.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:45.084 ==> default: -> value=-device,
00:02:45.084 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:45.084 ==> default: -> value=-drive,
00:02:45.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:45.084 ==> default: -> value=-device,
00:02:45.084 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:45.084 ==> default: -> value=-device,
00:02:45.084 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:45.084 ==> default: -> value=-device,
00:02:45.084 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:45.084 ==> default: -> value=-drive,
00:02:45.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:45.084 ==> default: -> value=-device,
00:02:45.084 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:45.345 ==> default: Creating shared folders metadata...
00:02:45.345 ==> default: Starting domain.
00:02:46.725 ==> default: Waiting for domain to get an IP address...
00:03:08.650 ==> default: Waiting for SSH to become available...
00:03:09.214 ==> default: Configuring and enabling network interfaces...
00:03:13.395 default: SSH address: 192.168.121.85:22
00:03:13.395 default: SSH username: vagrant
00:03:13.395 default: SSH auth method: private key
00:03:15.314 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:23.418 ==> default: Mounting SSHFS shared folder...
00:03:24.838 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:03:24.838 ==> default: Checking Mount..
00:03:25.772 ==> default: Folder Successfully Mounted!
00:03:25.772 ==> default: Running provisioner: file...
00:03:26.704 default: ~/.gitconfig => .gitconfig
00:03:27.268
00:03:27.268 SUCCESS!
00:03:27.268
00:03:27.268 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:03:27.268 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:27.268 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:03:27.268
00:03:27.276 [Pipeline] }
00:03:27.296 [Pipeline] // stage
00:03:27.306 [Pipeline] dir
00:03:27.306 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt
00:03:27.308 [Pipeline] {
00:03:27.321 [Pipeline] catchError
00:03:27.322 [Pipeline] {
00:03:27.333 [Pipeline] sh
00:03:27.606 + vagrant ssh-config --host vagrant
00:03:27.606 + sed -ne /^Host/,$p
00:03:27.606 + tee ssh_conf
00:03:31.784 Host vagrant
00:03:31.784 HostName 192.168.121.85
00:03:31.784 User vagrant
00:03:31.784 Port 22
00:03:31.784 UserKnownHostsFile /dev/null
00:03:31.784 StrictHostKeyChecking no
00:03:31.784 PasswordAuthentication no
00:03:31.784 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:03:31.784 IdentitiesOnly yes
00:03:31.784 LogLevel FATAL
00:03:31.784 ForwardAgent yes
00:03:31.784 ForwardX11 yes
00:03:31.784
00:03:31.796 [Pipeline] withEnv
00:03:31.798 [Pipeline] {
00:03:31.811 [Pipeline] sh
00:03:32.090 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:32.090 source /etc/os-release
00:03:32.090 [[ -e /image.version ]] && img=$(< /image.version)
00:03:32.090 # Minimal, systemd-like check.
00:03:32.090 if [[ -e /.dockerenv ]]; then
00:03:32.090 # Clear garbage from the node's name:
00:03:32.090 # agt-er_autotest_547-896 -> autotest_547-896
00:03:32.090 # $HOSTNAME is the actual container id
00:03:32.090 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:32.090 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:32.090 # We can assume this is a mount from a host where container is running,
00:03:32.090 # so fetch its hostname to easily identify the target swarm worker.
00:03:32.090 container="$(< /etc/hostname) ($agent)"
00:03:32.090 else
00:03:32.090 # Fallback
00:03:32.090 container=$agent
00:03:32.090 fi
00:03:32.090 fi
00:03:32.090 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:32.090
00:03:32.100 [Pipeline] }
00:03:32.121 [Pipeline] // withEnv
00:03:32.130 [Pipeline] setCustomBuildProperty
00:03:32.145 [Pipeline] stage
00:03:32.147 [Pipeline] { (Tests)
00:03:32.167 [Pipeline] sh
00:03:32.444 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:32.461 [Pipeline] sh
00:03:32.739 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:33.012 [Pipeline] timeout
00:03:33.012 Timeout set to expire in 40 min
00:03:33.014 [Pipeline] {
00:03:33.035 [Pipeline] sh
00:03:33.347 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:33.913 HEAD is now at a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent.
00:03:33.926 [Pipeline] sh
00:03:34.202 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:34.475 [Pipeline] sh
00:03:34.754 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:34.774 [Pipeline] sh
00:03:35.051 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:03:35.051 ++ readlink -f spdk_repo
00:03:35.051 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:35.051 + [[ -n /home/vagrant/spdk_repo ]]
00:03:35.051 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:35.051 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:35.051 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:35.051 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:35.051 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:35.051 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:35.051 + cd /home/vagrant/spdk_repo
00:03:35.051 + source /etc/os-release
00:03:35.051 ++ NAME='Fedora Linux'
00:03:35.051 ++ VERSION='38 (Cloud Edition)'
00:03:35.051 ++ ID=fedora
00:03:35.051 ++ VERSION_ID=38
00:03:35.051 ++ VERSION_CODENAME=
00:03:35.051 ++ PLATFORM_ID=platform:f38
00:03:35.051 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:03:35.051 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:35.051 ++ LOGO=fedora-logo-icon
00:03:35.051 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:03:35.051 ++ HOME_URL=https://fedoraproject.org/
00:03:35.051 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:03:35.051 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:35.051 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:35.051 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:35.051 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:03:35.051 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:35.051 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:03:35.051 ++ SUPPORT_END=2024-05-14
00:03:35.051 ++ VARIANT='Cloud Edition'
00:03:35.051 ++ VARIANT_ID=cloud
00:03:35.051 + uname -a
00:03:35.051 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:03:35.051 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:35.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:35.876 Hugepages
00:03:35.876 node hugesize free / total
00:03:35.876 node0 1048576kB 0 / 0
00:03:35.876 node0 2048kB 0 / 0
00:03:35.876
00:03:35.876 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:35.876 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:35.876 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:35.876 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:03:35.876 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:03:35.876 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:03:35.876 + rm -f /tmp/spdk-ld-path
00:03:35.876 + source autorun-spdk.conf
00:03:35.876 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:35.876 ++ SPDK_TEST_NVME=1
00:03:35.876 ++ SPDK_TEST_FTL=1
00:03:35.876 ++ SPDK_TEST_ISAL=1
00:03:35.876 ++ SPDK_RUN_ASAN=1
00:03:35.876 ++ SPDK_RUN_UBSAN=1
00:03:35.876 ++ SPDK_TEST_XNVME=1
00:03:35.876 ++ SPDK_TEST_NVME_FDP=1
00:03:35.876 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:35.876 ++ RUN_NIGHTLY=0
00:03:35.876 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:35.876 + [[ -n '' ]]
00:03:35.876 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:35.876 + for M in /var/spdk/build-*-manifest.txt
00:03:35.876 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:35.876 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:35.876 + for M in /var/spdk/build-*-manifest.txt
00:03:35.876 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:35.876 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:35.876 ++ uname
00:03:35.876 + [[ Linux == \L\i\n\u\x ]]
00:03:35.876 + sudo dmesg -T
00:03:35.876 + sudo dmesg --clear
00:03:35.876 + dmesg_pid=5202
00:03:35.876 + sudo dmesg -Tw
00:03:35.876 + [[ Fedora Linux == FreeBSD ]]
00:03:35.876 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:35.876 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:35.876 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:35.876 + [[ -x /usr/src/fio-static/fio ]]
00:03:35.876 + export FIO_BIN=/usr/src/fio-static/fio
00:03:35.876 + FIO_BIN=/usr/src/fio-static/fio
00:03:35.876 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:35.876 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:35.876 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:35.876 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:35.876 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:35.876 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:35.876 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:35.876 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:35.876 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:35.876 Test configuration:
00:03:35.876 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:35.876 SPDK_TEST_NVME=1
00:03:35.876 SPDK_TEST_FTL=1
00:03:35.876 SPDK_TEST_ISAL=1
00:03:35.876 SPDK_RUN_ASAN=1
00:03:35.876 SPDK_RUN_UBSAN=1
00:03:35.876 SPDK_TEST_XNVME=1
00:03:35.876 SPDK_TEST_NVME_FDP=1
00:03:35.876 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:36.135 RUN_NIGHTLY=0
13:45:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:36.135 13:45:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:36.135 13:45:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:36.135 13:45:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:36.135 13:45:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.135 13:45:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.135 13:45:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.135 13:45:00 -- paths/export.sh@5 -- $ export PATH
00:03:36.135 13:45:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.135 13:45:00 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:36.135 13:45:00 -- common/autobuild_common.sh@444 -- $ date +%s
00:03:36.135 13:45:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721051100.XXXXXX
00:03:36.135 13:45:00 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721051100.sxlBkg
00:03:36.135 13:45:00 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:03:36.135 13:45:00 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:03:36.135 13:45:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:03:36.135 13:45:00 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:36.135 13:45:00 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:36.135 13:45:00 -- common/autobuild_common.sh@460 -- $ get_config_params
00:03:36.135 13:45:00 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:03:36.135 13:45:00 -- common/autotest_common.sh@10 -- $ set +x
00:03:36.135 13:45:00 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:03:36.135 13:45:00 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:03:36.135 13:45:00 -- pm/common@17 -- $ local monitor
00:03:36.135 13:45:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:36.135 13:45:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:36.135 13:45:00 -- pm/common@25 -- $ sleep 1
00:03:36.135 13:45:00 -- pm/common@21 -- $ date +%s
00:03:36.135 13:45:00 -- pm/common@21 -- $ date +%s
00:03:36.135 13:45:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721051100
00:03:36.135 13:45:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721051100
00:03:36.135 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721051100_collect-vmstat.pm.log
00:03:36.135 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721051100_collect-cpu-load.pm.log
00:03:37.067 13:45:01 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:03:37.067 13:45:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:37.067 13:45:01 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:37.067 13:45:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:37.068 13:45:01 -- spdk/autobuild.sh@16 -- $ date -u
00:03:37.068 Mon Jul 15 01:45:01 PM UTC 2024
00:03:37.068 13:45:01 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:37.068 v24.09-pre-209-ga95bbf233
00:03:37.068 13:45:01 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:37.068 13:45:01 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:37.068 13:45:01 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:03:37.068 13:45:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:03:37.068 13:45:01 -- common/autotest_common.sh@10 -- $ set +x
00:03:37.068 ************************************
00:03:37.068 START TEST asan
00:03:37.068 ************************************
00:03:37.068 using asan
00:03:37.068 13:45:01 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:03:37.068
00:03:37.068 real 0m0.000s
00:03:37.068 user 0m0.000s
00:03:37.068 sys 0m0.000s
00:03:37.068 13:45:01 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:03:37.068 13:45:01 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:37.068 ************************************
00:03:37.068 END TEST asan
00:03:37.068 ************************************
00:03:37.068 13:45:01 -- common/autotest_common.sh@1142 -- $ return 0
00:03:37.068 13:45:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:37.068 13:45:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:37.068 13:45:01 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:03:37.068 13:45:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:03:37.068 13:45:01 -- common/autotest_common.sh@10 -- $ set +x
00:03:37.068 ************************************
00:03:37.068 START TEST ubsan
00:03:37.068 ************************************
00:03:37.068 using ubsan
00:03:37.068 13:45:01 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:03:37.068
00:03:37.068 real 0m0.000s
00:03:37.068 user 0m0.000s
00:03:37.068 sys 0m0.000s
00:03:37.068 13:45:01 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:03:37.068 13:45:01 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:37.068 ************************************
00:03:37.068 END TEST ubsan
00:03:37.068 ************************************
00:03:37.068 13:45:01 -- common/autotest_common.sh@1142 -- $ return 0
00:03:37.068 13:45:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:37.068 13:45:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:37.068 13:45:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:37.068 13:45:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:37.068 13:45:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:37.068 13:45:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:37.068 13:45:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:37.068 13:45:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:37.068 13:45:01 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:03:37.325 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:37.325 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:37.892 Using 'verbs' RDMA provider
00:03:51.077 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:03.317 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:03.317 Creating mk/config.mk...done.
00:04:03.317 Creating mk/cc.flags.mk...done.
00:04:03.317 Type 'make' to build.
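
The configure invocation recorded above can be replayed outside of CI. A minimal sketch, assuming a local SPDK checkout with submodules (SPDK_DIR below is an illustrative variable, not taken from this log) and fio sources at /usr/src/fio as on the CI VM; the flags are copied verbatim from the autobuild.sh@67 step above:

#!/usr/bin/env bash
# Sketch only: replay the configure + build steps that spdk/autobuild.sh performs in this job.
# SPDK_DIR is a placeholder for local use; the CI VM uses /home/vagrant/spdk_repo/spdk.
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-$HOME/spdk_repo/spdk}
cd "$SPDK_DIR"
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
make -j10   # the run_test make step that follows in this log also builds with -j10
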
00:04:03.317 13:45:27 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:04:03.317 13:45:27 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:04:03.317 13:45:27 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:04:03.317 13:45:27 -- common/autotest_common.sh@10 -- $ set +x
00:04:03.317 ************************************
00:04:03.317 START TEST make
00:04:03.317 ************************************
00:04:03.317 13:45:27 make -- common/autotest_common.sh@1123 -- $ make -j10
00:04:03.317 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:04:03.317 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:04:03.317 meson setup builddir \
00:04:03.317 -Dwith-libaio=enabled \
00:04:03.317 -Dwith-liburing=enabled \
00:04:03.317 -Dwith-libvfn=disabled \
00:04:03.317 -Dwith-spdk=false && \
00:04:03.317 meson compile -C builddir && \
00:04:03.317 cd -)
00:04:03.317 make[1]: Nothing to be done for 'all'.
00:04:09.867 The Meson build system
00:04:09.867 Version: 1.3.1
00:04:09.867 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:04:09.867 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:09.867 Build type: native build
00:04:09.867 Project name: xnvme
00:04:09.867 Project version: 0.7.3
00:04:09.868 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:04:09.868 C linker for the host machine: cc ld.bfd 2.39-16
00:04:09.868 Host machine cpu family: x86_64
00:04:09.868 Host machine cpu: x86_64
00:04:09.868 Message: host_machine.system: linux
00:04:09.868 Compiler for C supports arguments -Wno-missing-braces: YES
00:04:09.868 Compiler for C supports arguments -Wno-cast-function-type: YES
00:04:09.868 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:04:09.868 Run-time dependency threads found: YES
00:04:09.868 Has header "setupapi.h" : NO
00:04:09.868 Has header "linux/blkzoned.h" : YES
00:04:09.868 Has header "linux/blkzoned.h" : YES (cached)
00:04:09.868 Has header "libaio.h" : YES
00:04:09.868 Library aio found: YES
00:04:09.868 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:04:09.868 Run-time dependency liburing found: YES 2.2
00:04:09.868 Dependency libvfn skipped: feature with-libvfn disabled
00:04:09.868 Run-time dependency appleframeworks found: NO (tried framework)
00:04:09.868 Run-time dependency appleframeworks found: NO (tried framework)
00:04:09.868 Configuring xnvme_config.h using configuration
00:04:09.868 Configuring xnvme.spec using configuration
00:04:09.868 Run-time dependency bash-completion found: YES 2.11
00:04:09.868 Message: Bash-completions: /usr/share/bash-completion/completions
00:04:09.868 Program cp found: YES (/usr/bin/cp)
00:04:09.868 Has header "winsock2.h" : NO
00:04:09.868 Has header "dbghelp.h" : NO
00:04:09.868 Library rpcrt4 found: NO
00:04:09.868 Library rt found: YES
00:04:09.868 Checking for function "clock_gettime" with dependency -lrt: YES
00:04:09.868 Found CMake: /usr/bin/cmake (3.27.7)
00:04:09.868 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:04:09.868 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:04:09.868 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:04:09.868 Build targets in project: 32
00:04:09.868
00:04:09.868 xnvme 0.7.3
00:04:09.868
00:04:09.868 User defined options
00:04:09.868 with-libaio : enabled
00:04:09.868 with-liburing: enabled
00:04:09.868 with-libvfn : disabled
00:04:09.868 with-spdk : false
00:04:09.868
00:04:09.868 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:10.439 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:04:10.439 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:04:10.439 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:04:10.439 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:04:10.439 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:04:10.439 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:04:10.439 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:04:10.439 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:04:10.439 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:04:10.439 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:04:10.439 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:04:10.439 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:04:10.439 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:04:10.439 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:04:10.956 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:04:10.956 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:04:10.956 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:04:10.956 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:04:10.956 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:04:10.956 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:04:10.956 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:04:10.956 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:04:10.956 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:04:10.956 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:04:11.249 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:04:11.249 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:04:11.249 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:04:11.249 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:04:11.249 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:04:11.249 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:04:11.249 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:04:11.249 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:04:11.249 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:04:11.249 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:04:11.249 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:04:11.249 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:04:11.249 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:04:11.510 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:04:11.510 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:04:11.510 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:04:11.510 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:04:11.510 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:04:11.510 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:04:11.510 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:04:11.510 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:04:11.510 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:04:11.510 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:04:11.510 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:04:11.510 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:04:11.510 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:04:11.510 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:04:11.510 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:04:11.510 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:04:11.768 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:04:11.768 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:04:11.768 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:04:11.768 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:04:11.768 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:04:11.768 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:04:11.769 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:04:11.769 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:04:11.769 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:04:12.027 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:04:12.027 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:04:12.027 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:04:12.027 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:04:12.027 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:04:12.027 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:04:12.027 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:04:12.027 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:04:12.027 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:04:12.027 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:04:12.285 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:04:12.285 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:04:12.285 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:04:12.285 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:04:12.285 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:04:12.285 [77/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:04:12.285 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:04:12.285 [79/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:04:12.285 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:04:12.285 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:04:12.543 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:04:12.543 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:04:12.802 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:04:12.802 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:04:12.802 [86/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:04:12.803 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:04:12.803 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:04:12.803 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:04:12.803 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:04:12.803 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:04:12.803 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:04:12.803 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:04:12.803 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:04:12.803 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:04:12.803 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:04:13.062 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:04:13.062 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:04:13.062 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:04:13.062 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:04:13.062 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:04:13.062 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:04:13.062 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:04:13.062 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:04:13.062 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:04:13.062 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:04:13.062 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:04:13.062 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:04:13.062 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:04:13.062 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:04:13.062 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:04:13.062 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:04:13.321 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:04:13.321 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:04:13.321 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:04:13.321 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:04:13.321 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:04:13.321 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:04:13.321 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:04:13.321 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:04:13.321 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:04:13.321 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:04:13.321 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:04:13.579 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:04:13.579 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:04:13.579 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:04:13.579 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:04:13.579 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:04:13.579 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:04:13.579 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:04:13.579 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:04:13.579 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:04:13.579 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:04:13.837 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:04:13.837 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:04:13.837 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:04:13.837 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:04:13.837 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:04:13.837 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:04:13.837 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:04:13.837 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:04:13.837 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:04:14.097 [143/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:04:14.097 [144/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:04:14.097 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:04:14.097 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:04:14.097 [147/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:04:14.097 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:04:14.356 [149/203] Linking target lib/libxnvme.so
00:04:14.356 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:04:14.356 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:04:14.356 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:04:14.356 [153/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:04:14.356 [154/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:04:14.356 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:04:14.614 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:04:14.614 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:04:14.614 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:04:14.614 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:04:14.614 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:04:14.614 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:04:14.614 [162/203] Compiling C object tools/lblk.p/lblk.c.o
00:04:14.873 [163/203] Compiling C object tools/xdd.p/xdd.c.o
00:04:14.873 [164/203] Compiling C object tools/kvs.p/kvs.c.o
00:04:14.873 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:04:14.873 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:04:14.873 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:04:14.873 [168/203] Compiling C object tools/zoned.p/zoned.c.o
00:04:14.873 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:04:14.873 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:04:15.131 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:04:15.131 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:04:15.390 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:04:15.649 [174/203] Linking static target lib/libxnvme.a
00:04:15.649 [175/203] Linking target tests/xnvme_tests_buf
00:04:15.649 [176/203] Linking target tests/xnvme_tests_async_intf
00:04:15.649 [177/203] Linking target tests/xnvme_tests_scc
00:04:15.649 [178/203] Linking target tests/xnvme_tests_ioworker
00:04:15.649 [179/203] Linking target tests/xnvme_tests_xnvme_file
00:04:15.649 [180/203] Linking target tests/xnvme_tests_xnvme_cli
00:04:15.649 [181/203] Linking target tests/xnvme_tests_enum
00:04:15.649 [182/203] Linking target tests/xnvme_tests_znd_append
00:04:15.649 [183/203] Linking target tests/xnvme_tests_cli
00:04:15.649 [184/203] Linking target tests/xnvme_tests_znd_explicit_open
00:04:15.649 [185/203] Linking target tests/xnvme_tests_znd_state
00:04:15.907 [186/203] Linking target tests/xnvme_tests_kvs
00:04:15.907 [187/203] Linking target tests/xnvme_tests_znd_zrwa
00:04:15.907 [188/203] Linking target tools/lblk
00:04:15.907 [189/203] Linking target tests/xnvme_tests_map
00:04:15.907 [190/203] Linking target tools/xdd
00:04:15.907 [191/203] Linking target tools/zoned
00:04:15.907 [192/203] Linking target tests/xnvme_tests_lblk
00:04:15.907 [193/203] Linking target tools/xnvme_file
00:04:15.907 [194/203] Linking target tools/kvs
00:04:15.907 [195/203] Linking target examples/xnvme_enum
00:04:15.907 [196/203] Linking target examples/xnvme_dev
00:04:15.907 [197/203] Linking target examples/xnvme_io_async
00:04:15.907 [198/203] Linking target examples/zoned_io_async
00:04:15.907 [199/203] Linking target examples/xnvme_hello
00:04:15.907 [200/203] Linking target examples/xnvme_single_async
00:04:15.907 [201/203] Linking target examples/xnvme_single_sync
00:04:15.907 [202/203] Linking target tools/xnvme
00:04:15.907 [203/203] Linking target examples/zoned_io_sync
00:04:15.907 INFO: autodetecting backend as ninja
00:04:15.907 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:16.165 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:04:31.050 The Meson build system
00:04:31.050 Version: 1.3.1
00:04:31.050 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:04:31.050 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:04:31.050 Build type: native build
00:04:31.050 Program cat found: YES (/usr/bin/cat)
00:04:31.050 Project name: DPDK
00:04:31.050 Project version: 24.03.0
00:04:31.050 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:04:31.050 C linker for the host machine: cc ld.bfd 2.39-16
00:04:31.050 Host machine cpu family: x86_64
00:04:31.050 Host machine cpu: x86_64
00:04:31.050 Message: ## Building in Developer Mode ##
00:04:31.050 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:31.050 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:04:31.050 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:31.050 Program python3 found: YES (/usr/bin/python3)
00:04:31.050 Program cat found: YES (/usr/bin/cat)
00:04:31.050 Compiler for C supports arguments -march=native: YES
00:04:31.050 Checking for size of "void *" : 8
00:04:31.050 Checking for size of "void *" : 8 (cached)
00:04:31.050 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:04:31.050 Library m found: YES
00:04:31.050 Library numa found: YES
00:04:31.050 Has header "numaif.h" : YES
00:04:31.050 Library fdt found: NO
00:04:31.050 Library execinfo found: NO
00:04:31.050 Has header "execinfo.h" : YES
00:04:31.050 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:04:31.050 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:31.050 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:31.050 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:31.050 Run-time dependency openssl found: YES 3.0.9
00:04:31.050 Run-time dependency libpcap found: YES 1.10.4
00:04:31.050 Has header "pcap.h" with dependency libpcap: YES
00:04:31.050 Compiler for C supports arguments -Wcast-qual: YES
00:04:31.050 Compiler for C supports arguments -Wdeprecated: YES
00:04:31.050 Compiler for C supports arguments -Wformat: YES
00:04:31.050 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:31.050 Compiler for C supports arguments -Wformat-security: NO
00:04:31.050 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:31.050 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:31.050 Compiler for C supports arguments -Wnested-externs: YES
00:04:31.050 Compiler for C supports arguments -Wold-style-definition: YES
00:04:31.050 Compiler for C supports arguments -Wpointer-arith: YES
00:04:31.050 Compiler for C supports arguments -Wsign-compare: YES
00:04:31.050 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:31.050 Compiler for C supports arguments -Wundef: YES
00:04:31.050 Compiler for C supports arguments -Wwrite-strings: YES
00:04:31.050 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:31.050 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:31.050 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:31.050 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:31.050 Program objdump found: YES (/usr/bin/objdump)
00:04:31.050 Compiler for C supports arguments -mavx512f: YES
00:04:31.050 Checking if "AVX512 checking" compiles: YES
00:04:31.050 Fetching value of define "__SSE4_2__" : 1
00:04:31.050 Fetching value of define "__AES__" : 1
00:04:31.050 Fetching value of define "__AVX__" : 1
00:04:31.050 Fetching value of define "__AVX2__" : 1
00:04:31.050 Fetching value of define "__AVX512BW__" : (undefined)
00:04:31.050 Fetching value of define "__AVX512CD__" : (undefined)
00:04:31.050 Fetching value of define "__AVX512DQ__" : (undefined)
00:04:31.050 Fetching value of define "__AVX512F__" : (undefined)
00:04:31.050 Fetching value of define "__AVX512VL__" : (undefined)
00:04:31.050 Fetching value of define "__PCLMUL__" : 1
00:04:31.050 Fetching value of define "__RDRND__" : 1
00:04:31.050 Fetching value of define "__RDSEED__" : 1
00:04:31.050 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:31.050 Fetching value of define "__znver1__" : (undefined)
00:04:31.050 Fetching value of define "__znver2__" : (undefined)
00:04:31.050 Fetching value of define "__znver3__" : (undefined)
00:04:31.050 Fetching value of define "__znver4__" : (undefined)
00:04:31.050 Library asan found: YES
00:04:31.050 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:31.050 Message: lib/log: Defining dependency "log"
00:04:31.050 Message: lib/kvargs: Defining dependency "kvargs"
00:04:31.050 Message: lib/telemetry: Defining dependency "telemetry"
00:04:31.050 Library rt found: YES
00:04:31.050 Checking for function "getentropy" : NO
00:04:31.050 Message: lib/eal: Defining dependency "eal"
00:04:31.050 Message: lib/ring: Defining dependency "ring"
00:04:31.050 Message: lib/rcu: Defining dependency "rcu"
00:04:31.050 Message: lib/mempool: Defining dependency "mempool"
00:04:31.050 Message: lib/mbuf: Defining dependency "mbuf"
00:04:31.051 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:31.051 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:31.051 Compiler for C supports arguments -mpclmul: YES 00:04:31.051 Compiler for C supports arguments -maes: YES 00:04:31.051 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:31.051 Compiler for C supports arguments -mavx512bw: YES 00:04:31.051 Compiler for C supports arguments -mavx512dq: YES 00:04:31.051 Compiler for C supports arguments -mavx512vl: YES 00:04:31.051 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:31.051 Compiler for C supports arguments -mavx2: YES 00:04:31.051 Compiler for C supports arguments -mavx: YES 00:04:31.051 Message: lib/net: Defining dependency "net" 00:04:31.051 Message: lib/meter: Defining dependency "meter" 00:04:31.051 Message: lib/ethdev: Defining dependency "ethdev" 00:04:31.051 Message: lib/pci: Defining dependency "pci" 00:04:31.051 Message: lib/cmdline: Defining dependency "cmdline" 00:04:31.051 Message: lib/hash: Defining dependency "hash" 00:04:31.051 Message: lib/timer: Defining dependency "timer" 00:04:31.051 Message: lib/compressdev: Defining dependency "compressdev" 00:04:31.051 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:31.051 Message: lib/dmadev: Defining dependency "dmadev" 00:04:31.051 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:31.051 Message: lib/power: Defining dependency "power" 00:04:31.051 Message: lib/reorder: Defining dependency "reorder" 00:04:31.051 Message: lib/security: Defining dependency "security" 00:04:31.051 Has header "linux/userfaultfd.h" : YES 00:04:31.051 Has header "linux/vduse.h" : YES 00:04:31.051 Message: lib/vhost: Defining dependency "vhost" 00:04:31.051 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:31.051 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:31.051 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:31.051 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:31.051 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:31.051 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:31.051 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:31.051 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:31.051 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:31.051 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:31.051 Program doxygen found: YES (/usr/bin/doxygen) 00:04:31.051 Configuring doxy-api-html.conf using configuration 00:04:31.051 Configuring doxy-api-man.conf using configuration 00:04:31.051 Program mandb found: YES (/usr/bin/mandb) 00:04:31.051 Program sphinx-build found: NO 00:04:31.051 Configuring rte_build_config.h using configuration 00:04:31.051 Message: 00:04:31.051 ================= 00:04:31.051 Applications Enabled 00:04:31.051 ================= 00:04:31.051 00:04:31.051 apps: 00:04:31.051 00:04:31.051 00:04:31.051 Message: 00:04:31.051 ================= 00:04:31.051 Libraries Enabled 00:04:31.051 ================= 00:04:31.051 00:04:31.051 libs: 00:04:31.051 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:31.051 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:31.051 cryptodev, dmadev, power, reorder, security, vhost, 00:04:31.051 00:04:31.051 Message: 00:04:31.051 =============== 00:04:31.051 Drivers Enabled 00:04:31.051 
=============== 00:04:31.051 00:04:31.051 common: 00:04:31.051 00:04:31.051 bus: 00:04:31.051 pci, vdev, 00:04:31.051 mempool: 00:04:31.051 ring, 00:04:31.051 dma: 00:04:31.051 00:04:31.051 net: 00:04:31.051 00:04:31.051 crypto: 00:04:31.051 00:04:31.051 compress: 00:04:31.051 00:04:31.051 vdpa: 00:04:31.051 00:04:31.051 00:04:31.051 Message: 00:04:31.051 ================= 00:04:31.051 Content Skipped 00:04:31.051 ================= 00:04:31.051 00:04:31.051 apps: 00:04:31.051 dumpcap: explicitly disabled via build config 00:04:31.051 graph: explicitly disabled via build config 00:04:31.051 pdump: explicitly disabled via build config 00:04:31.051 proc-info: explicitly disabled via build config 00:04:31.051 test-acl: explicitly disabled via build config 00:04:31.051 test-bbdev: explicitly disabled via build config 00:04:31.051 test-cmdline: explicitly disabled via build config 00:04:31.051 test-compress-perf: explicitly disabled via build config 00:04:31.051 test-crypto-perf: explicitly disabled via build config 00:04:31.051 test-dma-perf: explicitly disabled via build config 00:04:31.051 test-eventdev: explicitly disabled via build config 00:04:31.051 test-fib: explicitly disabled via build config 00:04:31.051 test-flow-perf: explicitly disabled via build config 00:04:31.051 test-gpudev: explicitly disabled via build config 00:04:31.051 test-mldev: explicitly disabled via build config 00:04:31.051 test-pipeline: explicitly disabled via build config 00:04:31.051 test-pmd: explicitly disabled via build config 00:04:31.051 test-regex: explicitly disabled via build config 00:04:31.051 test-sad: explicitly disabled via build config 00:04:31.051 test-security-perf: explicitly disabled via build config 00:04:31.051 00:04:31.051 libs: 00:04:31.051 argparse: explicitly disabled via build config 00:04:31.051 metrics: explicitly disabled via build config 00:04:31.051 acl: explicitly disabled via build config 00:04:31.051 bbdev: explicitly disabled via build config 00:04:31.051 bitratestats: explicitly disabled via build config 00:04:31.051 bpf: explicitly disabled via build config 00:04:31.051 cfgfile: explicitly disabled via build config 00:04:31.051 distributor: explicitly disabled via build config 00:04:31.051 efd: explicitly disabled via build config 00:04:31.051 eventdev: explicitly disabled via build config 00:04:31.051 dispatcher: explicitly disabled via build config 00:04:31.051 gpudev: explicitly disabled via build config 00:04:31.051 gro: explicitly disabled via build config 00:04:31.051 gso: explicitly disabled via build config 00:04:31.051 ip_frag: explicitly disabled via build config 00:04:31.051 jobstats: explicitly disabled via build config 00:04:31.051 latencystats: explicitly disabled via build config 00:04:31.051 lpm: explicitly disabled via build config 00:04:31.051 member: explicitly disabled via build config 00:04:31.051 pcapng: explicitly disabled via build config 00:04:31.051 rawdev: explicitly disabled via build config 00:04:31.051 regexdev: explicitly disabled via build config 00:04:31.051 mldev: explicitly disabled via build config 00:04:31.051 rib: explicitly disabled via build config 00:04:31.051 sched: explicitly disabled via build config 00:04:31.051 stack: explicitly disabled via build config 00:04:31.051 ipsec: explicitly disabled via build config 00:04:31.051 pdcp: explicitly disabled via build config 00:04:31.051 fib: explicitly disabled via build config 00:04:31.051 port: explicitly disabled via build config 00:04:31.051 pdump: explicitly disabled via build config 
00:04:31.051 table: explicitly disabled via build config 00:04:31.051 pipeline: explicitly disabled via build config 00:04:31.051 graph: explicitly disabled via build config 00:04:31.051 node: explicitly disabled via build config 00:04:31.051 00:04:31.051 drivers: 00:04:31.051 common/cpt: not in enabled drivers build config 00:04:31.051 common/dpaax: not in enabled drivers build config 00:04:31.051 common/iavf: not in enabled drivers build config 00:04:31.051 common/idpf: not in enabled drivers build config 00:04:31.051 common/ionic: not in enabled drivers build config 00:04:31.051 common/mvep: not in enabled drivers build config 00:04:31.051 common/octeontx: not in enabled drivers build config 00:04:31.051 bus/auxiliary: not in enabled drivers build config 00:04:31.051 bus/cdx: not in enabled drivers build config 00:04:31.051 bus/dpaa: not in enabled drivers build config 00:04:31.051 bus/fslmc: not in enabled drivers build config 00:04:31.051 bus/ifpga: not in enabled drivers build config 00:04:31.051 bus/platform: not in enabled drivers build config 00:04:31.051 bus/uacce: not in enabled drivers build config 00:04:31.051 bus/vmbus: not in enabled drivers build config 00:04:31.051 common/cnxk: not in enabled drivers build config 00:04:31.051 common/mlx5: not in enabled drivers build config 00:04:31.051 common/nfp: not in enabled drivers build config 00:04:31.051 common/nitrox: not in enabled drivers build config 00:04:31.051 common/qat: not in enabled drivers build config 00:04:31.051 common/sfc_efx: not in enabled drivers build config 00:04:31.051 mempool/bucket: not in enabled drivers build config 00:04:31.051 mempool/cnxk: not in enabled drivers build config 00:04:31.051 mempool/dpaa: not in enabled drivers build config 00:04:31.051 mempool/dpaa2: not in enabled drivers build config 00:04:31.051 mempool/octeontx: not in enabled drivers build config 00:04:31.051 mempool/stack: not in enabled drivers build config 00:04:31.051 dma/cnxk: not in enabled drivers build config 00:04:31.051 dma/dpaa: not in enabled drivers build config 00:04:31.051 dma/dpaa2: not in enabled drivers build config 00:04:31.051 dma/hisilicon: not in enabled drivers build config 00:04:31.051 dma/idxd: not in enabled drivers build config 00:04:31.051 dma/ioat: not in enabled drivers build config 00:04:31.051 dma/skeleton: not in enabled drivers build config 00:04:31.051 net/af_packet: not in enabled drivers build config 00:04:31.051 net/af_xdp: not in enabled drivers build config 00:04:31.051 net/ark: not in enabled drivers build config 00:04:31.051 net/atlantic: not in enabled drivers build config 00:04:31.051 net/avp: not in enabled drivers build config 00:04:31.051 net/axgbe: not in enabled drivers build config 00:04:31.051 net/bnx2x: not in enabled drivers build config 00:04:31.051 net/bnxt: not in enabled drivers build config 00:04:31.051 net/bonding: not in enabled drivers build config 00:04:31.051 net/cnxk: not in enabled drivers build config 00:04:31.051 net/cpfl: not in enabled drivers build config 00:04:31.051 net/cxgbe: not in enabled drivers build config 00:04:31.051 net/dpaa: not in enabled drivers build config 00:04:31.051 net/dpaa2: not in enabled drivers build config 00:04:31.051 net/e1000: not in enabled drivers build config 00:04:31.051 net/ena: not in enabled drivers build config 00:04:31.051 net/enetc: not in enabled drivers build config 00:04:31.051 net/enetfec: not in enabled drivers build config 00:04:31.051 net/enic: not in enabled drivers build config 00:04:31.052 net/failsafe: not in enabled 
drivers build config 00:04:31.052 net/fm10k: not in enabled drivers build config 00:04:31.052 net/gve: not in enabled drivers build config 00:04:31.052 net/hinic: not in enabled drivers build config 00:04:31.052 net/hns3: not in enabled drivers build config 00:04:31.052 net/i40e: not in enabled drivers build config 00:04:31.052 net/iavf: not in enabled drivers build config 00:04:31.052 net/ice: not in enabled drivers build config 00:04:31.052 net/idpf: not in enabled drivers build config 00:04:31.052 net/igc: not in enabled drivers build config 00:04:31.052 net/ionic: not in enabled drivers build config 00:04:31.052 net/ipn3ke: not in enabled drivers build config 00:04:31.052 net/ixgbe: not in enabled drivers build config 00:04:31.052 net/mana: not in enabled drivers build config 00:04:31.052 net/memif: not in enabled drivers build config 00:04:31.052 net/mlx4: not in enabled drivers build config 00:04:31.052 net/mlx5: not in enabled drivers build config 00:04:31.052 net/mvneta: not in enabled drivers build config 00:04:31.052 net/mvpp2: not in enabled drivers build config 00:04:31.052 net/netvsc: not in enabled drivers build config 00:04:31.052 net/nfb: not in enabled drivers build config 00:04:31.052 net/nfp: not in enabled drivers build config 00:04:31.052 net/ngbe: not in enabled drivers build config 00:04:31.052 net/null: not in enabled drivers build config 00:04:31.052 net/octeontx: not in enabled drivers build config 00:04:31.052 net/octeon_ep: not in enabled drivers build config 00:04:31.052 net/pcap: not in enabled drivers build config 00:04:31.052 net/pfe: not in enabled drivers build config 00:04:31.052 net/qede: not in enabled drivers build config 00:04:31.052 net/ring: not in enabled drivers build config 00:04:31.052 net/sfc: not in enabled drivers build config 00:04:31.052 net/softnic: not in enabled drivers build config 00:04:31.052 net/tap: not in enabled drivers build config 00:04:31.052 net/thunderx: not in enabled drivers build config 00:04:31.052 net/txgbe: not in enabled drivers build config 00:04:31.052 net/vdev_netvsc: not in enabled drivers build config 00:04:31.052 net/vhost: not in enabled drivers build config 00:04:31.052 net/virtio: not in enabled drivers build config 00:04:31.052 net/vmxnet3: not in enabled drivers build config 00:04:31.052 raw/*: missing internal dependency, "rawdev" 00:04:31.052 crypto/armv8: not in enabled drivers build config 00:04:31.052 crypto/bcmfs: not in enabled drivers build config 00:04:31.052 crypto/caam_jr: not in enabled drivers build config 00:04:31.052 crypto/ccp: not in enabled drivers build config 00:04:31.052 crypto/cnxk: not in enabled drivers build config 00:04:31.052 crypto/dpaa_sec: not in enabled drivers build config 00:04:31.052 crypto/dpaa2_sec: not in enabled drivers build config 00:04:31.052 crypto/ipsec_mb: not in enabled drivers build config 00:04:31.052 crypto/mlx5: not in enabled drivers build config 00:04:31.052 crypto/mvsam: not in enabled drivers build config 00:04:31.052 crypto/nitrox: not in enabled drivers build config 00:04:31.052 crypto/null: not in enabled drivers build config 00:04:31.052 crypto/octeontx: not in enabled drivers build config 00:04:31.052 crypto/openssl: not in enabled drivers build config 00:04:31.052 crypto/scheduler: not in enabled drivers build config 00:04:31.052 crypto/uadk: not in enabled drivers build config 00:04:31.052 crypto/virtio: not in enabled drivers build config 00:04:31.052 compress/isal: not in enabled drivers build config 00:04:31.052 compress/mlx5: not in enabled 
drivers build config 00:04:31.052 compress/nitrox: not in enabled drivers build config 00:04:31.052 compress/octeontx: not in enabled drivers build config 00:04:31.052 compress/zlib: not in enabled drivers build config 00:04:31.052 regex/*: missing internal dependency, "regexdev" 00:04:31.052 ml/*: missing internal dependency, "mldev" 00:04:31.052 vdpa/ifc: not in enabled drivers build config 00:04:31.052 vdpa/mlx5: not in enabled drivers build config 00:04:31.052 vdpa/nfp: not in enabled drivers build config 00:04:31.052 vdpa/sfc: not in enabled drivers build config 00:04:31.052 event/*: missing internal dependency, "eventdev" 00:04:31.052 baseband/*: missing internal dependency, "bbdev" 00:04:31.052 gpu/*: missing internal dependency, "gpudev" 00:04:31.052 00:04:31.052 00:04:31.052 Build targets in project: 85 00:04:31.052 00:04:31.052 DPDK 24.03.0 00:04:31.052 00:04:31.052 User defined options 00:04:31.052 buildtype : debug 00:04:31.052 default_library : shared 00:04:31.052 libdir : lib 00:04:31.052 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:31.052 b_sanitize : address 00:04:31.052 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:31.052 c_link_args : 00:04:31.052 cpu_instruction_set: native 00:04:31.052 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:31.052 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:31.052 enable_docs : false 00:04:31.052 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:31.052 enable_kmods : false 00:04:31.052 max_lcores : 128 00:04:31.052 tests : false 00:04:31.052 00:04:31.052 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:32.064 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:32.064 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:32.065 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:32.065 [3/268] Linking static target lib/librte_kvargs.a 00:04:32.329 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:32.329 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:32.329 [6/268] Linking static target lib/librte_log.a 00:04:33.265 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.265 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:33.265 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:33.523 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:33.523 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:33.523 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:33.523 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:33.783 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:33.783 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.783 [16/268] Linking target 
lib/librte_log.so.24.1 00:04:34.041 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:34.041 [18/268] Linking static target lib/librte_telemetry.a 00:04:34.041 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:34.041 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:34.300 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:34.558 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:34.816 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:35.382 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:35.382 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:35.382 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:35.382 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:35.641 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:35.641 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:35.641 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:35.898 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:35.898 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.233 [33/268] Linking target lib/librte_telemetry.so.24.1 00:04:36.504 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:36.504 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:36.762 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:36.762 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:37.020 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:37.020 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:37.278 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:37.278 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:37.278 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:37.536 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:37.536 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:37.793 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:38.050 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:38.311 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:38.311 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:38.879 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:38.879 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:38.879 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:39.143 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:39.401 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:39.401 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:39.401 [55/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:39.660 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:39.660 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:39.919 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:39.919 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:40.177 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:40.435 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:40.729 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:40.729 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:41.029 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:41.029 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:41.029 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:41.597 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:41.857 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:41.857 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:41.857 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:42.116 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:42.116 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:42.116 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:42.375 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:42.375 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:42.633 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:42.892 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:43.460 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:43.460 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:43.460 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:43.460 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:43.719 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:43.719 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:43.977 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:43.977 [85/268] Linking static target lib/librte_ring.a 00:04:44.255 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:44.255 [87/268] Linking static target lib/librte_eal.a 00:04:44.857 [88/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.129 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:45.129 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:45.129 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:45.129 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:45.387 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:45.387 [94/268] Linking static target lib/librte_mempool.a 00:04:45.718 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:45.718 [96/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:45.718 [97/268] Linking static target lib/librte_rcu.a 00:04:46.657 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:46.657 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:46.657 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.913 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:46.913 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:47.171 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:47.171 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:47.735 [105/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.735 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:47.735 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:47.735 [108/268] Linking static target lib/librte_net.a 00:04:47.993 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:47.993 [110/268] Linking static target lib/librte_meter.a 00:04:48.251 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:48.251 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:48.507 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.507 [114/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:48.507 [115/268] Linking static target lib/librte_mbuf.a 00:04:48.507 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.764 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:49.021 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:49.279 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:49.843 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:49.843 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:49.843 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.101 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:50.666 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:50.666 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:50.666 [126/268] Linking static target lib/librte_pci.a 00:04:50.923 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:50.923 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:50.923 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:51.181 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:51.181 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:51.181 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.181 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:51.439 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:51.439 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:51.439 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:51.439 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:51.439 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:51.439 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:51.697 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:51.697 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:51.697 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:51.697 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:51.697 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:51.697 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:52.263 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:52.828 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:52.828 [148/268] Linking static target lib/librte_cmdline.a 00:04:52.828 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:52.828 [150/268] Linking static target lib/librte_timer.a 00:04:53.086 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:53.086 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:53.086 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:53.086 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:53.345 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:53.605 [156/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.170 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:54.170 [158/268] Linking static target lib/librte_ethdev.a 00:04:54.427 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:54.427 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:54.427 [161/268] Linking static target lib/librte_compressdev.a 00:04:54.684 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:54.684 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:54.684 [164/268] Linking static target lib/librte_hash.a 00:04:54.684 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:54.684 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:54.968 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.968 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:54.968 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:54.968 [170/268] Linking static target lib/librte_dmadev.a 00:04:55.233 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:55.799 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:55.799 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:56.058 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.316 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.316 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:04:56.316 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:56.574 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:56.832 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:57.089 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:57.089 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:57.089 [182/268] Linking static target lib/librte_cryptodev.a 00:04:57.089 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:57.346 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:57.604 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:57.862 [186/268] Linking static target lib/librte_power.a 00:04:57.863 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:57.863 [188/268] Linking static target lib/librte_reorder.a 00:04:58.122 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:58.381 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:58.639 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:58.639 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:58.639 [193/268] Linking static target lib/librte_security.a 00:04:58.897 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.462 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.462 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:59.720 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.978 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:59.978 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:00.237 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:00.237 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.494 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:00.494 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:00.753 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:00.753 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:00.753 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:01.011 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:01.270 [208/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.270 [209/268] Linking target lib/librte_eal.so.24.1 00:05:01.528 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:01.528 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:01.528 [212/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:01.528 [213/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:01.528 [214/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:01.528 [215/268] Linking target lib/librte_timer.so.24.1 00:05:01.528 [216/268] Linking target lib/librte_ring.so.24.1 00:05:01.528 [217/268] 
Linking target lib/librte_dmadev.so.24.1 00:05:01.528 [218/268] Linking target lib/librte_meter.so.24.1 00:05:01.528 [219/268] Linking target lib/librte_pci.so.24.1 00:05:01.786 [220/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:01.786 [221/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:01.786 [222/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:01.786 [223/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:01.786 [224/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:01.786 [225/268] Linking target lib/librte_rcu.so.24.1 00:05:01.786 [226/268] Linking target lib/librte_mempool.so.24.1 00:05:01.786 [227/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:01.786 [228/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:01.786 [229/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:01.786 [230/268] Linking static target drivers/librte_bus_pci.a 00:05:02.045 [231/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:02.045 [232/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:02.045 [233/268] Linking target lib/librte_mbuf.so.24.1 00:05:02.045 [234/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:02.045 [235/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:02.045 [236/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:02.045 [237/268] Linking static target drivers/librte_bus_vdev.a 00:05:02.045 [238/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:02.045 [239/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:02.303 [240/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:02.303 [241/268] Linking target lib/librte_compressdev.so.24.1 00:05:02.303 [242/268] Linking target lib/librte_reorder.so.24.1 00:05:02.303 [243/268] Linking target lib/librte_cryptodev.so.24.1 00:05:02.303 [244/268] Linking target lib/librte_net.so.24.1 00:05:02.303 [245/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:02.562 [246/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:02.562 [247/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:02.562 [248/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.562 [249/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:02.562 [250/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:02.562 [251/268] Linking static target drivers/librte_mempool_ring.a 00:05:02.562 [252/268] Linking target lib/librte_cmdline.so.24.1 00:05:02.562 [253/268] Linking target lib/librte_hash.so.24.1 00:05:02.562 [254/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:02.562 [255/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:02.562 [256/268] Linking target lib/librte_security.so.24.1 00:05:02.562 [257/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture 
output) 00:05:02.562 [258/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:02.562 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:04.462 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:04.462 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.462 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:04.462 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:04.462 [264/268] Linking target lib/librte_power.so.24.1 00:05:08.731 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:08.731 [266/268] Linking static target lib/librte_vhost.a 00:05:09.664 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.664 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:09.664 INFO: autodetecting backend as ninja 00:05:09.664 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:11.036 CC lib/ut_mock/mock.o 00:05:11.036 CC lib/ut/ut.o 00:05:11.036 CC lib/log/log.o 00:05:11.036 CC lib/log/log_flags.o 00:05:11.036 CC lib/log/log_deprecated.o 00:05:11.036 LIB libspdk_ut_mock.a 00:05:11.294 SO libspdk_ut_mock.so.6.0 00:05:11.294 LIB libspdk_ut.a 00:05:11.294 LIB libspdk_log.a 00:05:11.294 SO libspdk_ut.so.2.0 00:05:11.294 SO libspdk_log.so.7.0 00:05:11.294 SYMLINK libspdk_ut_mock.so 00:05:11.294 SYMLINK libspdk_ut.so 00:05:11.294 SYMLINK libspdk_log.so 00:05:11.552 CC lib/ioat/ioat.o 00:05:11.552 CC lib/util/base64.o 00:05:11.552 CXX lib/trace_parser/trace.o 00:05:11.552 CC lib/util/bit_array.o 00:05:11.552 CC lib/dma/dma.o 00:05:11.552 CC lib/util/cpuset.o 00:05:11.552 CC lib/util/crc32.o 00:05:11.552 CC lib/util/crc16.o 00:05:11.552 CC lib/util/crc32c.o 00:05:12.162 CC lib/util/crc32_ieee.o 00:05:12.162 CC lib/util/crc64.o 00:05:12.162 CC lib/vfio_user/host/vfio_user_pci.o 00:05:12.162 CC lib/vfio_user/host/vfio_user.o 00:05:12.162 CC lib/util/dif.o 00:05:12.162 LIB libspdk_dma.a 00:05:12.162 CC lib/util/fd.o 00:05:12.162 SO libspdk_dma.so.4.0 00:05:12.162 CC lib/util/file.o 00:05:12.162 CC lib/util/hexlify.o 00:05:12.162 CC lib/util/iov.o 00:05:12.162 SYMLINK libspdk_dma.so 00:05:12.420 CC lib/util/math.o 00:05:12.420 LIB libspdk_ioat.a 00:05:12.420 CC lib/util/pipe.o 00:05:12.420 SO libspdk_ioat.so.7.0 00:05:12.420 CC lib/util/strerror_tls.o 00:05:12.420 CC lib/util/string.o 00:05:12.678 SYMLINK libspdk_ioat.so 00:05:12.678 CC lib/util/uuid.o 00:05:12.678 LIB libspdk_vfio_user.a 00:05:12.678 CC lib/util/fd_group.o 00:05:12.678 SO libspdk_vfio_user.so.5.0 00:05:12.678 CC lib/util/xor.o 00:05:12.678 CC lib/util/zipf.o 00:05:12.678 SYMLINK libspdk_vfio_user.so 00:05:13.243 LIB libspdk_util.a 00:05:13.501 SO libspdk_util.so.9.1 00:05:13.758 SYMLINK libspdk_util.so 00:05:13.758 LIB libspdk_trace_parser.a 00:05:14.029 SO libspdk_trace_parser.so.5.0 00:05:14.029 CC lib/json/json_util.o 00:05:14.029 CC lib/json/json_write.o 00:05:14.029 CC lib/json/json_parse.o 00:05:14.029 CC lib/rdma_utils/rdma_utils.o 00:05:14.029 CC lib/idxd/idxd.o 00:05:14.029 CC lib/vmd/vmd.o 00:05:14.029 CC lib/rdma_provider/common.o 00:05:14.029 CC lib/conf/conf.o 00:05:14.029 CC lib/env_dpdk/env.o 00:05:14.029 SYMLINK libspdk_trace_parser.so 00:05:14.029 CC lib/env_dpdk/memory.o 00:05:14.299 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:14.299 LIB libspdk_conf.a 00:05:14.299 SO libspdk_conf.so.6.0 
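The lib/log objects compiled above provide SPDK's logging API. A minimal sketch of using it, assuming the public spdk/log.h interface of mainline SPDK (spdk_log_set_print_level plus the SPDK_NOTICELOG/SPDK_ERRLOG macros); this program is illustrative and not part of the build:

    #include "spdk/log.h"

    int
    main(void)
    {
        /* Raise the print threshold so debug-level messages reach stderr. */
        spdk_log_set_print_level(SPDK_LOG_DEBUG);
        SPDK_NOTICELOG("log component is up\n");
        SPDK_ERRLOG("error-level message for comparison\n");
        return 0;
    }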
00:05:14.556 LIB libspdk_rdma_utils.a 00:05:14.556 SYMLINK libspdk_conf.so 00:05:14.556 CC lib/env_dpdk/pci.o 00:05:14.556 CC lib/vmd/led.o 00:05:14.556 SO libspdk_rdma_utils.so.1.0 00:05:14.556 LIB libspdk_json.a 00:05:14.556 CC lib/env_dpdk/init.o 00:05:14.556 SO libspdk_json.so.6.0 00:05:14.556 SYMLINK libspdk_rdma_utils.so 00:05:14.556 CC lib/env_dpdk/threads.o 00:05:14.828 LIB libspdk_rdma_provider.a 00:05:14.828 CC lib/idxd/idxd_user.o 00:05:14.828 SYMLINK libspdk_json.so 00:05:14.828 CC lib/idxd/idxd_kernel.o 00:05:14.828 SO libspdk_rdma_provider.so.6.0 00:05:15.089 SYMLINK libspdk_rdma_provider.so 00:05:15.089 CC lib/env_dpdk/pci_ioat.o 00:05:15.089 CC lib/env_dpdk/pci_virtio.o 00:05:15.361 CC lib/env_dpdk/pci_vmd.o 00:05:15.361 CC lib/env_dpdk/pci_idxd.o 00:05:15.361 CC lib/env_dpdk/pci_event.o 00:05:15.361 CC lib/jsonrpc/jsonrpc_server.o 00:05:15.361 LIB libspdk_idxd.a 00:05:15.361 CC lib/env_dpdk/sigbus_handler.o 00:05:15.625 SO libspdk_idxd.so.12.0 00:05:15.625 LIB libspdk_vmd.a 00:05:15.625 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:15.625 CC lib/env_dpdk/pci_dpdk.o 00:05:15.625 SO libspdk_vmd.so.6.0 00:05:15.625 CC lib/jsonrpc/jsonrpc_client.o 00:05:15.625 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:15.625 SYMLINK libspdk_idxd.so 00:05:15.625 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:15.625 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:15.884 SYMLINK libspdk_vmd.so 00:05:16.142 LIB libspdk_jsonrpc.a 00:05:16.142 SO libspdk_jsonrpc.so.6.0 00:05:16.400 SYMLINK libspdk_jsonrpc.so 00:05:16.658 CC lib/rpc/rpc.o 00:05:16.916 LIB libspdk_rpc.a 00:05:16.916 SO libspdk_rpc.so.6.0 00:05:16.916 SYMLINK libspdk_rpc.so 00:05:17.174 CC lib/trace/trace.o 00:05:17.174 CC lib/trace/trace_flags.o 00:05:17.174 CC lib/notify/notify.o 00:05:17.174 CC lib/trace/trace_rpc.o 00:05:17.174 CC lib/notify/notify_rpc.o 00:05:17.174 CC lib/keyring/keyring.o 00:05:17.174 CC lib/keyring/keyring_rpc.o 00:05:17.174 LIB libspdk_env_dpdk.a 00:05:17.433 SO libspdk_env_dpdk.so.14.1 00:05:17.433 LIB libspdk_notify.a 00:05:17.433 LIB libspdk_keyring.a 00:05:17.433 SO libspdk_notify.so.6.0 00:05:17.433 SO libspdk_keyring.so.1.0 00:05:17.690 SYMLINK libspdk_notify.so 00:05:17.690 SYMLINK libspdk_env_dpdk.so 00:05:17.691 SYMLINK libspdk_keyring.so 00:05:17.691 LIB libspdk_trace.a 00:05:17.691 SO libspdk_trace.so.10.0 00:05:17.691 SYMLINK libspdk_trace.so 00:05:17.948 CC lib/thread/thread.o 00:05:17.949 CC lib/sock/sock_rpc.o 00:05:17.949 CC lib/sock/sock.o 00:05:17.949 CC lib/thread/iobuf.o 00:05:18.515 LIB libspdk_sock.a 00:05:18.515 SO libspdk_sock.so.10.0 00:05:18.773 SYMLINK libspdk_sock.so 00:05:19.030 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:19.030 CC lib/nvme/nvme_fabric.o 00:05:19.030 CC lib/nvme/nvme_ctrlr.o 00:05:19.030 CC lib/nvme/nvme_ns_cmd.o 00:05:19.030 CC lib/nvme/nvme_ns.o 00:05:19.030 CC lib/nvme/nvme_pcie_common.o 00:05:19.030 CC lib/nvme/nvme_pcie.o 00:05:19.030 CC lib/nvme/nvme_qpair.o 00:05:19.030 CC lib/nvme/nvme.o 00:05:20.405 CC lib/nvme/nvme_quirks.o 00:05:20.405 CC lib/nvme/nvme_transport.o 00:05:20.405 CC lib/nvme/nvme_discovery.o 00:05:20.405 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:20.663 LIB libspdk_thread.a 00:05:20.663 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:20.663 SO libspdk_thread.so.10.1 00:05:20.921 SYMLINK libspdk_thread.so 00:05:20.921 CC lib/nvme/nvme_tcp.o 00:05:20.921 CC lib/nvme/nvme_opal.o 00:05:21.179 CC lib/nvme/nvme_io_msg.o 00:05:21.179 CC lib/nvme/nvme_poll_group.o 00:05:21.179 CC lib/accel/accel.o 00:05:21.745 CC lib/accel/accel_rpc.o 00:05:21.745 CC lib/accel/accel_sw.o 00:05:21.745 CC 
lib/nvme/nvme_zns.o 00:05:22.313 CC lib/blob/blobstore.o 00:05:22.313 CC lib/blob/request.o 00:05:22.571 CC lib/init/json_config.o 00:05:22.571 CC lib/init/subsystem.o 00:05:22.571 CC lib/nvme/nvme_stubs.o 00:05:22.830 CC lib/virtio/virtio.o 00:05:22.830 CC lib/virtio/virtio_vhost_user.o 00:05:23.089 CC lib/virtio/virtio_vfio_user.o 00:05:23.089 CC lib/init/subsystem_rpc.o 00:05:23.089 CC lib/init/rpc.o 00:05:23.348 CC lib/blob/zeroes.o 00:05:23.349 LIB libspdk_init.a 00:05:23.349 CC lib/nvme/nvme_auth.o 00:05:23.607 SO libspdk_init.so.5.0 00:05:23.607 CC lib/nvme/nvme_cuse.o 00:05:23.607 SYMLINK libspdk_init.so 00:05:23.607 CC lib/virtio/virtio_pci.o 00:05:23.607 CC lib/nvme/nvme_rdma.o 00:05:23.607 CC lib/blob/blob_bs_dev.o 00:05:23.865 LIB libspdk_accel.a 00:05:23.865 SO libspdk_accel.so.15.1 00:05:24.124 SYMLINK libspdk_accel.so 00:05:24.124 CC lib/event/app.o 00:05:24.124 CC lib/event/reactor.o 00:05:24.383 CC lib/event/log_rpc.o 00:05:24.383 LIB libspdk_virtio.a 00:05:24.383 SO libspdk_virtio.so.7.0 00:05:24.383 CC lib/bdev/bdev.o 00:05:24.644 CC lib/bdev/bdev_rpc.o 00:05:24.644 CC lib/bdev/bdev_zone.o 00:05:24.644 SYMLINK libspdk_virtio.so 00:05:24.644 CC lib/bdev/part.o 00:05:24.902 CC lib/bdev/scsi_nvme.o 00:05:24.902 CC lib/event/app_rpc.o 00:05:25.160 CC lib/event/scheduler_static.o 00:05:25.726 LIB libspdk_event.a 00:05:25.726 SO libspdk_event.so.14.0 00:05:25.727 SYMLINK libspdk_event.so 00:05:26.292 LIB libspdk_nvme.a 00:05:26.551 SO libspdk_nvme.so.13.1 00:05:27.117 SYMLINK libspdk_nvme.so 00:05:29.020 LIB libspdk_blob.a 00:05:29.020 LIB libspdk_bdev.a 00:05:29.020 SO libspdk_blob.so.11.0 00:05:29.278 SO libspdk_bdev.so.15.1 00:05:29.278 SYMLINK libspdk_blob.so 00:05:29.278 SYMLINK libspdk_bdev.so 00:05:29.536 CC lib/blobfs/blobfs.o 00:05:29.536 CC lib/blobfs/tree.o 00:05:29.536 CC lib/lvol/lvol.o 00:05:29.536 CC lib/nvmf/ctrlr.o 00:05:29.536 CC lib/nvmf/ctrlr_discovery.o 00:05:29.536 CC lib/nvmf/ctrlr_bdev.o 00:05:29.536 CC lib/ublk/ublk.o 00:05:29.536 CC lib/ftl/ftl_core.o 00:05:29.536 CC lib/nbd/nbd.o 00:05:29.537 CC lib/scsi/dev.o 00:05:29.795 CC lib/scsi/lun.o 00:05:30.053 CC lib/ublk/ublk_rpc.o 00:05:30.311 CC lib/ftl/ftl_init.o 00:05:30.311 CC lib/nvmf/subsystem.o 00:05:30.311 CC lib/scsi/port.o 00:05:30.569 CC lib/nbd/nbd_rpc.o 00:05:30.827 CC lib/ftl/ftl_layout.o 00:05:30.827 CC lib/scsi/scsi.o 00:05:30.827 CC lib/scsi/scsi_bdev.o 00:05:30.827 LIB libspdk_nbd.a 00:05:30.827 SO libspdk_nbd.so.7.0 00:05:31.084 LIB libspdk_ublk.a 00:05:31.084 SYMLINK libspdk_nbd.so 00:05:31.084 CC lib/scsi/scsi_pr.o 00:05:31.084 CC lib/ftl/ftl_debug.o 00:05:31.084 SO libspdk_ublk.so.3.0 00:05:31.084 CC lib/ftl/ftl_io.o 00:05:31.341 SYMLINK libspdk_ublk.so 00:05:31.341 CC lib/ftl/ftl_sb.o 00:05:31.341 LIB libspdk_blobfs.a 00:05:31.341 SO libspdk_blobfs.so.10.0 00:05:31.341 CC lib/nvmf/nvmf.o 00:05:31.598 SYMLINK libspdk_blobfs.so 00:05:31.598 CC lib/ftl/ftl_l2p.o 00:05:31.598 CC lib/ftl/ftl_l2p_flat.o 00:05:31.598 CC lib/scsi/scsi_rpc.o 00:05:31.598 CC lib/scsi/task.o 00:05:31.598 LIB libspdk_lvol.a 00:05:31.856 CC lib/nvmf/nvmf_rpc.o 00:05:31.856 SO libspdk_lvol.so.10.0 00:05:31.856 CC lib/nvmf/transport.o 00:05:31.856 CC lib/nvmf/tcp.o 00:05:31.856 SYMLINK libspdk_lvol.so 00:05:31.856 CC lib/nvmf/stubs.o 00:05:31.856 CC lib/nvmf/mdns_server.o 00:05:31.856 CC lib/ftl/ftl_nv_cache.o 00:05:32.114 LIB libspdk_scsi.a 00:05:32.114 SO libspdk_scsi.so.9.0 00:05:32.372 SYMLINK libspdk_scsi.so 00:05:32.372 CC lib/nvmf/rdma.o 00:05:32.937 CC lib/nvmf/auth.o 00:05:32.937 CC lib/ftl/ftl_band.o 
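The lib/nvme objects above make up SPDK's userspace NVMe driver. A hedged sketch of its probe/attach flow, assuming the public spdk/env.h and spdk/nvme.h APIs (the shape follows upstream's hello_world example; the app name is hypothetical and error handling is trimmed):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("probing %s\n", trid->traddr);
        return true; /* opt in to attaching every controller found */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);       /* defaults: hugepages, DPDK EAL */
        opts.name = "nvme_probe_sketch"; /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* Enumerate local PCIe NVMe controllers; callbacks fire inline. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }

Returning true from probe_cb opts in to attaching each discovered controller; spdk_nvme_probe() runs the enumeration synchronously.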
00:05:33.194 CC lib/ftl/ftl_band_ops.o 00:05:33.195 CC lib/ftl/ftl_writer.o 00:05:33.452 CC lib/ftl/ftl_rq.o 00:05:33.453 CC lib/ftl/ftl_reloc.o 00:05:33.453 CC lib/vhost/vhost.o 00:05:33.453 CC lib/iscsi/conn.o 00:05:33.736 CC lib/iscsi/init_grp.o 00:05:33.736 CC lib/iscsi/iscsi.o 00:05:34.020 CC lib/ftl/ftl_l2p_cache.o 00:05:34.020 CC lib/iscsi/md5.o 00:05:34.020 CC lib/iscsi/param.o 00:05:34.278 CC lib/ftl/ftl_p2l.o 00:05:34.278 CC lib/ftl/mngt/ftl_mngt.o 00:05:34.535 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:34.535 CC lib/vhost/vhost_rpc.o 00:05:34.535 CC lib/vhost/vhost_scsi.o 00:05:34.793 CC lib/vhost/vhost_blk.o 00:05:34.793 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:34.793 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:35.050 CC lib/iscsi/portal_grp.o 00:05:35.050 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:35.050 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:35.308 CC lib/vhost/rte_vhost_user.o 00:05:35.308 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:35.566 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:35.566 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:35.566 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:35.822 CC lib/iscsi/tgt_node.o 00:05:35.822 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:35.822 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:35.822 CC lib/iscsi/iscsi_subsystem.o 00:05:36.081 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:36.081 CC lib/ftl/utils/ftl_conf.o 00:05:36.081 CC lib/ftl/utils/ftl_md.o 00:05:36.338 CC lib/ftl/utils/ftl_mempool.o 00:05:36.596 CC lib/iscsi/iscsi_rpc.o 00:05:36.596 CC lib/ftl/utils/ftl_bitmap.o 00:05:36.854 CC lib/ftl/utils/ftl_property.o 00:05:36.854 CC lib/iscsi/task.o 00:05:36.854 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:36.854 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:36.854 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:37.112 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:37.112 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:37.112 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:37.112 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:37.370 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:37.370 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:37.370 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:37.370 LIB libspdk_vhost.a 00:05:37.370 LIB libspdk_iscsi.a 00:05:37.370 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:37.629 SO libspdk_vhost.so.8.0 00:05:37.629 CC lib/ftl/base/ftl_base_dev.o 00:05:37.629 LIB libspdk_nvmf.a 00:05:37.629 SO libspdk_iscsi.so.8.0 00:05:37.629 CC lib/ftl/base/ftl_base_bdev.o 00:05:37.629 CC lib/ftl/ftl_trace.o 00:05:37.910 SYMLINK libspdk_vhost.so 00:05:37.910 SO libspdk_nvmf.so.19.0 00:05:37.910 SYMLINK libspdk_iscsi.so 00:05:38.169 LIB libspdk_ftl.a 00:05:38.428 SYMLINK libspdk_nvmf.so 00:05:38.689 SO libspdk_ftl.so.9.0 00:05:39.622 SYMLINK libspdk_ftl.so 00:05:39.881 CC module/env_dpdk/env_dpdk_rpc.o 00:05:40.139 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:40.139 CC module/accel/ioat/accel_ioat.o 00:05:40.139 CC module/accel/iaa/accel_iaa.o 00:05:40.139 CC module/accel/error/accel_error.o 00:05:40.139 CC module/accel/dsa/accel_dsa.o 00:05:40.139 CC module/blob/bdev/blob_bdev.o 00:05:40.139 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:40.139 CC module/sock/posix/posix.o 00:05:40.139 CC module/keyring/file/keyring.o 00:05:40.139 LIB libspdk_env_dpdk_rpc.a 00:05:40.397 SO libspdk_env_dpdk_rpc.so.6.0 00:05:40.397 SYMLINK libspdk_env_dpdk_rpc.so 00:05:40.397 CC module/keyring/file/keyring_rpc.o 00:05:40.397 LIB libspdk_scheduler_dpdk_governor.a 00:05:40.397 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:40.397 CC module/accel/error/accel_error_rpc.o 00:05:40.655 CC module/accel/ioat/accel_ioat_rpc.o 
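The module/* objects being compiled in this stretch plug into SPDK's event framework, which an application enters through spdk_app_start(). A minimal sketch, assuming the spdk/event.h API of recent SPDK releases (where spdk_app_opts_init() takes the struct size; older releases took only the pointer); it starts the reactor, logs once, and requests shutdown:

    #include "spdk/event.h"
    #include "spdk/log.h"

    static void
    start_fn(void *ctx)
    {
        SPDK_NOTICELOG("reactor running; subsystems initialized\n");
        spdk_app_stop(0); /* request a clean shutdown immediately */
    }

    int
    main(void)
    {
        struct spdk_app_opts opts;
        int rc;

        spdk_app_opts_init(&opts, sizeof(opts)); /* size arg: recent SPDK */
        opts.name = "event_sketch";              /* hypothetical app name */
        rc = spdk_app_start(&opts, start_fn, NULL);
        spdk_app_fini();
        return rc;
    }

spdk_app_start() blocks inside the reactor loop until spdk_app_stop() is called, which is why start_fn requests shutdown explicitly.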
00:05:40.655 LIB libspdk_scheduler_dynamic.a 00:05:40.655 CC module/accel/iaa/accel_iaa_rpc.o 00:05:40.655 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:40.655 SO libspdk_scheduler_dynamic.so.4.0 00:05:40.655 CC module/accel/dsa/accel_dsa_rpc.o 00:05:40.655 LIB libspdk_blob_bdev.a 00:05:40.655 LIB libspdk_keyring_file.a 00:05:40.655 SYMLINK libspdk_scheduler_dynamic.so 00:05:40.961 SO libspdk_keyring_file.so.1.0 00:05:40.961 SO libspdk_blob_bdev.so.11.0 00:05:40.961 LIB libspdk_accel_ioat.a 00:05:40.961 LIB libspdk_accel_error.a 00:05:40.961 CC module/scheduler/gscheduler/gscheduler.o 00:05:40.961 SO libspdk_accel_ioat.so.6.0 00:05:40.961 SYMLINK libspdk_keyring_file.so 00:05:40.961 SYMLINK libspdk_blob_bdev.so 00:05:40.961 SO libspdk_accel_error.so.2.0 00:05:40.961 LIB libspdk_accel_iaa.a 00:05:40.961 LIB libspdk_accel_dsa.a 00:05:40.961 SYMLINK libspdk_accel_error.so 00:05:40.961 SYMLINK libspdk_accel_ioat.so 00:05:40.961 SO libspdk_accel_iaa.so.3.0 00:05:41.224 SO libspdk_accel_dsa.so.5.0 00:05:41.224 LIB libspdk_scheduler_gscheduler.a 00:05:41.224 CC module/keyring/linux/keyring.o 00:05:41.224 SYMLINK libspdk_accel_iaa.so 00:05:41.224 CC module/keyring/linux/keyring_rpc.o 00:05:41.224 SO libspdk_scheduler_gscheduler.so.4.0 00:05:41.224 SYMLINK libspdk_accel_dsa.so 00:05:41.224 SYMLINK libspdk_scheduler_gscheduler.so 00:05:41.483 LIB libspdk_keyring_linux.a 00:05:41.483 CC module/bdev/error/vbdev_error.o 00:05:41.483 CC module/bdev/lvol/vbdev_lvol.o 00:05:41.483 CC module/bdev/gpt/gpt.o 00:05:41.483 CC module/bdev/delay/vbdev_delay.o 00:05:41.483 SO libspdk_keyring_linux.so.1.0 00:05:41.483 CC module/blobfs/bdev/blobfs_bdev.o 00:05:41.741 CC module/bdev/malloc/bdev_malloc.o 00:05:41.741 SYMLINK libspdk_keyring_linux.so 00:05:41.741 CC module/bdev/null/bdev_null.o 00:05:41.741 CC module/bdev/nvme/bdev_nvme.o 00:05:41.999 CC module/bdev/gpt/vbdev_gpt.o 00:05:41.999 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:41.999 LIB libspdk_sock_posix.a 00:05:41.999 CC module/bdev/error/vbdev_error_rpc.o 00:05:41.999 SO libspdk_sock_posix.so.6.0 00:05:42.257 CC module/bdev/passthru/vbdev_passthru.o 00:05:42.257 CC module/bdev/null/bdev_null_rpc.o 00:05:42.257 SYMLINK libspdk_sock_posix.so 00:05:42.257 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:42.257 LIB libspdk_blobfs_bdev.a 00:05:42.257 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:42.257 LIB libspdk_bdev_error.a 00:05:42.516 SO libspdk_blobfs_bdev.so.6.0 00:05:42.516 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:42.516 SO libspdk_bdev_error.so.6.0 00:05:42.516 LIB libspdk_bdev_gpt.a 00:05:42.516 SO libspdk_bdev_gpt.so.6.0 00:05:42.516 SYMLINK libspdk_blobfs_bdev.so 00:05:42.516 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:42.516 SYMLINK libspdk_bdev_error.so 00:05:42.774 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:42.774 LIB libspdk_bdev_null.a 00:05:42.774 SYMLINK libspdk_bdev_gpt.so 00:05:42.774 LIB libspdk_bdev_delay.a 00:05:42.774 SO libspdk_bdev_null.so.6.0 00:05:42.774 SO libspdk_bdev_delay.so.6.0 00:05:42.774 LIB libspdk_bdev_malloc.a 00:05:42.774 CC module/bdev/nvme/nvme_rpc.o 00:05:42.774 SO libspdk_bdev_malloc.so.6.0 00:05:42.774 SYMLINK libspdk_bdev_null.so 00:05:43.033 SYMLINK libspdk_bdev_delay.so 00:05:43.033 CC module/bdev/nvme/bdev_mdns_client.o 00:05:43.033 LIB libspdk_bdev_passthru.a 00:05:43.033 CC module/bdev/raid/bdev_raid.o 00:05:43.033 SYMLINK libspdk_bdev_malloc.so 00:05:43.033 CC module/bdev/split/vbdev_split.o 00:05:43.033 SO libspdk_bdev_passthru.so.6.0 00:05:43.291 SYMLINK libspdk_bdev_passthru.so 00:05:43.291 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:05:43.291 CC module/bdev/xnvme/bdev_xnvme.o 00:05:43.550 LIB libspdk_bdev_lvol.a 00:05:43.550 CC module/bdev/split/vbdev_split_rpc.o 00:05:43.550 SO libspdk_bdev_lvol.so.6.0 00:05:43.550 CC module/bdev/aio/bdev_aio.o 00:05:43.550 CC module/bdev/ftl/bdev_ftl.o 00:05:43.550 CC module/bdev/iscsi/bdev_iscsi.o 00:05:43.550 SYMLINK libspdk_bdev_lvol.so 00:05:43.550 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:43.808 LIB libspdk_bdev_split.a 00:05:43.808 SO libspdk_bdev_split.so.6.0 00:05:43.808 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:43.808 CC module/bdev/nvme/vbdev_opal.o 00:05:44.066 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:44.066 SYMLINK libspdk_bdev_split.so 00:05:44.066 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:44.066 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:44.066 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:44.066 CC module/bdev/aio/bdev_aio_rpc.o 00:05:44.324 LIB libspdk_bdev_xnvme.a 00:05:44.324 SO libspdk_bdev_xnvme.so.3.0 00:05:44.324 LIB libspdk_bdev_zone_block.a 00:05:44.324 LIB libspdk_bdev_iscsi.a 00:05:44.324 SO libspdk_bdev_zone_block.so.6.0 00:05:44.324 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:44.324 SO libspdk_bdev_iscsi.so.6.0 00:05:44.324 SYMLINK libspdk_bdev_xnvme.so 00:05:44.324 CC module/bdev/raid/bdev_raid_rpc.o 00:05:44.582 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:44.582 LIB libspdk_bdev_aio.a 00:05:44.582 SYMLINK libspdk_bdev_zone_block.so 00:05:44.582 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:44.582 SO libspdk_bdev_aio.so.6.0 00:05:44.582 SYMLINK libspdk_bdev_iscsi.so 00:05:44.582 LIB libspdk_bdev_ftl.a 00:05:44.582 CC module/bdev/raid/bdev_raid_sb.o 00:05:44.582 SYMLINK libspdk_bdev_aio.so 00:05:44.582 CC module/bdev/raid/raid0.o 00:05:44.582 SO libspdk_bdev_ftl.so.6.0 00:05:44.840 CC module/bdev/raid/raid1.o 00:05:44.840 SYMLINK libspdk_bdev_ftl.so 00:05:44.840 CC module/bdev/raid/concat.o 00:05:45.098 LIB libspdk_bdev_virtio.a 00:05:45.098 SO libspdk_bdev_virtio.so.6.0 00:05:45.098 SYMLINK libspdk_bdev_virtio.so 00:05:45.356 LIB libspdk_bdev_raid.a 00:05:45.356 SO libspdk_bdev_raid.so.6.0 00:05:45.614 SYMLINK libspdk_bdev_raid.so 00:05:46.599 LIB libspdk_bdev_nvme.a 00:05:46.599 SO libspdk_bdev_nvme.so.7.0 00:05:46.857 SYMLINK libspdk_bdev_nvme.so 00:05:47.423 CC module/event/subsystems/sock/sock.o 00:05:47.423 CC module/event/subsystems/vmd/vmd.o 00:05:47.423 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:47.423 CC module/event/subsystems/keyring/keyring.o 00:05:47.423 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:47.423 CC module/event/subsystems/iobuf/iobuf.o 00:05:47.423 CC module/event/subsystems/scheduler/scheduler.o 00:05:47.423 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:47.423 LIB libspdk_event_keyring.a 00:05:47.423 LIB libspdk_event_vhost_blk.a 00:05:47.423 SO libspdk_event_keyring.so.1.0 00:05:47.423 LIB libspdk_event_sock.a 00:05:47.681 SO libspdk_event_vhost_blk.so.3.0 00:05:47.681 SO libspdk_event_sock.so.5.0 00:05:47.681 LIB libspdk_event_vmd.a 00:05:47.681 LIB libspdk_event_scheduler.a 00:05:47.681 SYMLINK libspdk_event_keyring.so 00:05:47.681 LIB libspdk_event_iobuf.a 00:05:47.681 SO libspdk_event_vmd.so.6.0 00:05:47.681 SO libspdk_event_scheduler.so.4.0 00:05:47.681 SYMLINK libspdk_event_vhost_blk.so 00:05:47.681 SO libspdk_event_iobuf.so.3.0 00:05:47.682 SYMLINK libspdk_event_sock.so 00:05:47.682 SYMLINK libspdk_event_scheduler.so 00:05:47.682 SYMLINK libspdk_event_vmd.so 00:05:47.682 SYMLINK libspdk_event_iobuf.so 00:05:47.939 CC 
module/event/subsystems/accel/accel.o 00:05:48.198 LIB libspdk_event_accel.a 00:05:48.198 SO libspdk_event_accel.so.6.0 00:05:48.198 SYMLINK libspdk_event_accel.so 00:05:48.456 CC module/event/subsystems/bdev/bdev.o 00:05:48.714 LIB libspdk_event_bdev.a 00:05:48.714 SO libspdk_event_bdev.so.6.0 00:05:48.714 SYMLINK libspdk_event_bdev.so 00:05:48.971 CC module/event/subsystems/nbd/nbd.o 00:05:48.971 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:48.971 CC module/event/subsystems/ublk/ublk.o 00:05:48.971 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:48.971 CC module/event/subsystems/scsi/scsi.o 00:05:49.227 LIB libspdk_event_scsi.a 00:05:49.227 SO libspdk_event_scsi.so.6.0 00:05:49.227 LIB libspdk_event_ublk.a 00:05:49.227 LIB libspdk_event_nbd.a 00:05:49.227 SO libspdk_event_ublk.so.3.0 00:05:49.228 SO libspdk_event_nbd.so.6.0 00:05:49.486 SYMLINK libspdk_event_scsi.so 00:05:49.486 LIB libspdk_event_nvmf.a 00:05:49.487 SYMLINK libspdk_event_nbd.so 00:05:49.487 SYMLINK libspdk_event_ublk.so 00:05:49.487 SO libspdk_event_nvmf.so.6.0 00:05:49.487 SYMLINK libspdk_event_nvmf.so 00:05:49.487 CC module/event/subsystems/iscsi/iscsi.o 00:05:49.487 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:49.766 LIB libspdk_event_vhost_scsi.a 00:05:49.766 SO libspdk_event_vhost_scsi.so.3.0 00:05:49.766 LIB libspdk_event_iscsi.a 00:05:49.766 SYMLINK libspdk_event_vhost_scsi.so 00:05:49.766 SO libspdk_event_iscsi.so.6.0 00:05:50.025 SYMLINK libspdk_event_iscsi.so 00:05:50.025 SO libspdk.so.6.0 00:05:50.025 SYMLINK libspdk.so 00:05:50.284 CXX app/trace/trace.o 00:05:50.284 CC app/spdk_lspci/spdk_lspci.o 00:05:50.284 CC app/trace_record/trace_record.o 00:05:50.284 CC app/spdk_nvme_perf/perf.o 00:05:50.284 CC app/iscsi_tgt/iscsi_tgt.o 00:05:50.284 CC app/nvmf_tgt/nvmf_main.o 00:05:50.543 CC examples/ioat/perf/perf.o 00:05:50.543 CC app/spdk_tgt/spdk_tgt.o 00:05:50.543 CC test/thread/poller_perf/poller_perf.o 00:05:50.543 CC examples/util/zipf/zipf.o 00:05:50.543 LINK spdk_lspci 00:05:50.543 LINK iscsi_tgt 00:05:50.543 LINK spdk_trace_record 00:05:50.543 LINK poller_perf 00:05:50.801 LINK nvmf_tgt 00:05:50.801 LINK spdk_tgt 00:05:50.801 LINK ioat_perf 00:05:50.801 LINK zipf 00:05:51.059 LINK spdk_trace 00:05:51.059 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:51.059 CC test/app/bdev_svc/bdev_svc.o 00:05:51.059 CC test/dma/test_dma/test_dma.o 00:05:51.059 CC test/app/histogram_perf/histogram_perf.o 00:05:51.059 CC examples/ioat/verify/verify.o 00:05:51.059 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:51.059 CC app/spdk_nvme_identify/identify.o 00:05:51.059 CC app/spdk_nvme_discover/discovery_aer.o 00:05:51.318 LINK interrupt_tgt 00:05:51.318 LINK histogram_perf 00:05:51.318 LINK bdev_svc 00:05:51.318 LINK spdk_nvme_discover 00:05:51.318 LINK verify 00:05:51.318 CC examples/thread/thread/thread_ex.o 00:05:51.575 LINK spdk_nvme_perf 00:05:51.575 LINK test_dma 00:05:51.575 CC test/app/jsoncat/jsoncat.o 00:05:51.575 CC examples/sock/hello_world/hello_sock.o 00:05:51.575 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:51.575 LINK nvme_fuzz 00:05:51.833 LINK jsoncat 00:05:51.833 LINK thread 00:05:51.833 TEST_HEADER include/spdk/accel.h 00:05:51.833 TEST_HEADER include/spdk/accel_module.h 00:05:51.833 TEST_HEADER include/spdk/assert.h 00:05:51.833 TEST_HEADER include/spdk/barrier.h 00:05:51.833 TEST_HEADER include/spdk/base64.h 00:05:51.833 TEST_HEADER include/spdk/bdev.h 00:05:51.833 TEST_HEADER include/spdk/bdev_module.h 00:05:51.833 TEST_HEADER include/spdk/bdev_zone.h 00:05:51.833 TEST_HEADER 
include/spdk/bit_array.h 00:05:51.833 TEST_HEADER include/spdk/bit_pool.h 00:05:51.833 CC examples/vmd/lsvmd/lsvmd.o 00:05:51.833 TEST_HEADER include/spdk/blob_bdev.h 00:05:51.833 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:51.833 TEST_HEADER include/spdk/blobfs.h 00:05:51.833 CC examples/idxd/perf/perf.o 00:05:51.833 TEST_HEADER include/spdk/blob.h 00:05:51.833 TEST_HEADER include/spdk/conf.h 00:05:51.833 TEST_HEADER include/spdk/config.h 00:05:51.833 TEST_HEADER include/spdk/cpuset.h 00:05:51.833 TEST_HEADER include/spdk/crc16.h 00:05:51.833 TEST_HEADER include/spdk/crc32.h 00:05:51.833 TEST_HEADER include/spdk/crc64.h 00:05:51.833 TEST_HEADER include/spdk/dif.h 00:05:51.833 TEST_HEADER include/spdk/dma.h 00:05:51.833 TEST_HEADER include/spdk/endian.h 00:05:51.833 TEST_HEADER include/spdk/env_dpdk.h 00:05:51.833 TEST_HEADER include/spdk/env.h 00:05:51.833 TEST_HEADER include/spdk/event.h 00:05:51.833 TEST_HEADER include/spdk/fd_group.h 00:05:51.833 TEST_HEADER include/spdk/fd.h 00:05:51.833 TEST_HEADER include/spdk/file.h 00:05:51.833 TEST_HEADER include/spdk/ftl.h 00:05:51.833 TEST_HEADER include/spdk/gpt_spec.h 00:05:51.833 TEST_HEADER include/spdk/hexlify.h 00:05:51.833 TEST_HEADER include/spdk/histogram_data.h 00:05:51.834 TEST_HEADER include/spdk/idxd.h 00:05:51.834 TEST_HEADER include/spdk/idxd_spec.h 00:05:51.834 TEST_HEADER include/spdk/init.h 00:05:51.834 TEST_HEADER include/spdk/ioat.h 00:05:51.834 TEST_HEADER include/spdk/ioat_spec.h 00:05:51.834 TEST_HEADER include/spdk/iscsi_spec.h 00:05:51.834 TEST_HEADER include/spdk/json.h 00:05:51.834 TEST_HEADER include/spdk/jsonrpc.h 00:05:51.834 TEST_HEADER include/spdk/keyring.h 00:05:51.834 TEST_HEADER include/spdk/keyring_module.h 00:05:51.834 TEST_HEADER include/spdk/likely.h 00:05:51.834 TEST_HEADER include/spdk/log.h 00:05:51.834 TEST_HEADER include/spdk/lvol.h 00:05:51.834 TEST_HEADER include/spdk/memory.h 00:05:51.834 TEST_HEADER include/spdk/mmio.h 00:05:51.834 TEST_HEADER include/spdk/nbd.h 00:05:51.834 TEST_HEADER include/spdk/notify.h 00:05:51.834 TEST_HEADER include/spdk/nvme.h 00:05:51.834 TEST_HEADER include/spdk/nvme_intel.h 00:05:51.834 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:51.834 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:51.834 TEST_HEADER include/spdk/nvme_spec.h 00:05:51.834 TEST_HEADER include/spdk/nvme_zns.h 00:05:51.834 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:51.834 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:51.834 TEST_HEADER include/spdk/nvmf.h 00:05:51.834 TEST_HEADER include/spdk/nvmf_spec.h 00:05:51.834 TEST_HEADER include/spdk/nvmf_transport.h 00:05:51.834 TEST_HEADER include/spdk/opal.h 00:05:51.834 TEST_HEADER include/spdk/opal_spec.h 00:05:51.834 TEST_HEADER include/spdk/pci_ids.h 00:05:51.834 TEST_HEADER include/spdk/pipe.h 00:05:51.834 TEST_HEADER include/spdk/queue.h 00:05:51.834 TEST_HEADER include/spdk/reduce.h 00:05:51.834 TEST_HEADER include/spdk/rpc.h 00:05:51.834 TEST_HEADER include/spdk/scheduler.h 00:05:51.834 TEST_HEADER include/spdk/scsi.h 00:05:51.834 TEST_HEADER include/spdk/scsi_spec.h 00:05:51.834 TEST_HEADER include/spdk/sock.h 00:05:51.834 LINK lsvmd 00:05:51.834 TEST_HEADER include/spdk/stdinc.h 00:05:51.834 TEST_HEADER include/spdk/string.h 00:05:51.834 TEST_HEADER include/spdk/thread.h 00:05:51.834 TEST_HEADER include/spdk/trace.h 00:05:51.834 TEST_HEADER include/spdk/trace_parser.h 00:05:51.834 TEST_HEADER include/spdk/tree.h 00:05:51.834 LINK hello_sock 00:05:51.834 TEST_HEADER include/spdk/ublk.h 00:05:51.834 CC app/spdk_top/spdk_top.o 00:05:51.834 TEST_HEADER 
include/spdk/util.h 00:05:51.834 TEST_HEADER include/spdk/uuid.h 00:05:51.834 TEST_HEADER include/spdk/version.h 00:05:51.834 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:51.834 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:51.834 TEST_HEADER include/spdk/vhost.h 00:05:51.834 TEST_HEADER include/spdk/vmd.h 00:05:52.092 TEST_HEADER include/spdk/xor.h 00:05:52.092 TEST_HEADER include/spdk/zipf.h 00:05:52.092 CXX test/cpp_headers/accel.o 00:05:52.092 CXX test/cpp_headers/accel_module.o 00:05:52.092 CC test/env/vtophys/vtophys.o 00:05:52.092 CC test/env/mem_callbacks/mem_callbacks.o 00:05:52.092 CXX test/cpp_headers/assert.o 00:05:52.092 LINK vtophys 00:05:52.092 LINK idxd_perf 00:05:52.350 CC examples/vmd/led/led.o 00:05:52.350 LINK spdk_nvme_identify 00:05:52.350 CXX test/cpp_headers/barrier.o 00:05:52.350 CXX test/cpp_headers/base64.o 00:05:52.350 CC examples/accel/perf/accel_perf.o 00:05:52.350 CC examples/blob/hello_world/hello_blob.o 00:05:52.350 LINK led 00:05:52.608 CXX test/cpp_headers/bdev.o 00:05:52.608 CC examples/nvme/hello_world/hello_world.o 00:05:52.608 CC examples/nvme/reconnect/reconnect.o 00:05:52.608 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:52.608 LINK hello_blob 00:05:52.867 LINK mem_callbacks 00:05:52.867 CXX test/cpp_headers/bdev_module.o 00:05:52.867 LINK hello_world 00:05:52.867 CC test/event/event_perf/event_perf.o 00:05:53.125 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:53.125 LINK accel_perf 00:05:53.125 CXX test/cpp_headers/bdev_zone.o 00:05:53.125 LINK event_perf 00:05:53.125 CC examples/blob/cli/blobcli.o 00:05:53.125 CXX test/cpp_headers/bit_array.o 00:05:53.125 LINK reconnect 00:05:53.125 LINK env_dpdk_post_init 00:05:53.383 LINK spdk_top 00:05:53.383 CXX test/cpp_headers/bit_pool.o 00:05:53.383 LINK nvme_manage 00:05:53.383 CC test/event/reactor/reactor.o 00:05:53.383 CC examples/nvme/arbitration/arbitration.o 00:05:53.383 CC test/event/reactor_perf/reactor_perf.o 00:05:53.383 CC test/event/app_repeat/app_repeat.o 00:05:53.383 CC test/env/memory/memory_ut.o 00:05:53.383 CXX test/cpp_headers/blob_bdev.o 00:05:53.383 LINK reactor 00:05:53.383 CXX test/cpp_headers/blobfs_bdev.o 00:05:53.642 LINK reactor_perf 00:05:53.642 CC app/vhost/vhost.o 00:05:53.642 LINK app_repeat 00:05:53.642 CXX test/cpp_headers/blobfs.o 00:05:53.642 LINK blobcli 00:05:53.642 LINK arbitration 00:05:53.900 LINK vhost 00:05:53.900 CXX test/cpp_headers/blob.o 00:05:53.900 CC test/event/scheduler/scheduler.o 00:05:53.900 CC test/nvme/aer/aer.o 00:05:53.900 CC examples/nvme/hotplug/hotplug.o 00:05:53.900 LINK iscsi_fuzz 00:05:53.900 CXX test/cpp_headers/conf.o 00:05:53.900 CC examples/bdev/hello_world/hello_bdev.o 00:05:54.159 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:54.159 CXX test/cpp_headers/config.o 00:05:54.159 LINK scheduler 00:05:54.159 CC app/spdk_dd/spdk_dd.o 00:05:54.159 CXX test/cpp_headers/cpuset.o 00:05:54.159 LINK hotplug 00:05:54.159 CC app/fio/nvme/fio_plugin.o 00:05:54.159 LINK hello_bdev 00:05:54.159 LINK aer 00:05:54.159 LINK cmb_copy 00:05:54.417 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:54.417 CXX test/cpp_headers/crc16.o 00:05:54.417 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:54.417 CXX test/cpp_headers/crc32.o 00:05:54.417 CC app/fio/bdev/fio_plugin.o 00:05:54.417 CC test/nvme/reset/reset.o 00:05:54.675 CC examples/nvme/abort/abort.o 00:05:54.675 CC test/env/pci/pci_ut.o 00:05:54.675 LINK spdk_dd 00:05:54.675 CC examples/bdev/bdevperf/bdevperf.o 00:05:54.675 CXX test/cpp_headers/crc64.o 00:05:54.959 CXX test/cpp_headers/dif.o 
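The run of TEST_HEADER includes followed by CXX test/cpp_headers/*.o entries is SPDK's header self-containedness check: every public header under include/spdk is compiled as its own translation unit, so a header that forgets one of its own dependencies fails the build on the spot. A minimal sketch of the idea, assuming a conventional include/ layout (the loop and temp-file handling here are illustrative, not the actual test harness):

    # Compile each public header in isolation; a header that does not
    # pull in everything it needs fails its own one-line TU.
    for hdr in include/spdk/*.h; do
        tu=$(mktemp --suffix=.cpp)
        printf '#include <spdk/%s>\n' "$(basename "$hdr")" > "$tu"
        c++ -Iinclude -c "$tu" -o /dev/null || echo "not self-contained: $hdr"
        rm -f "$tu"
    done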
00:05:54.959 LINK vhost_fuzz 00:05:54.960 LINK memory_ut 00:05:54.960 LINK spdk_nvme 00:05:54.960 LINK reset 00:05:54.960 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:54.960 CXX test/cpp_headers/dma.o 00:05:54.960 LINK abort 00:05:54.960 CXX test/cpp_headers/endian.o 00:05:54.960 LINK pci_ut 00:05:55.218 CC test/app/stub/stub.o 00:05:55.218 CC test/rpc_client/rpc_client_test.o 00:05:55.218 LINK pmr_persistence 00:05:55.218 CXX test/cpp_headers/env_dpdk.o 00:05:55.218 CC test/nvme/sgl/sgl.o 00:05:55.218 CC test/nvme/e2edp/nvme_dp.o 00:05:55.218 LINK spdk_bdev 00:05:55.218 CC test/nvme/overhead/overhead.o 00:05:55.476 LINK stub 00:05:55.476 CXX test/cpp_headers/env.o 00:05:55.476 LINK rpc_client_test 00:05:55.476 LINK sgl 00:05:55.476 CC test/blobfs/mkfs/mkfs.o 00:05:55.476 CXX test/cpp_headers/event.o 00:05:55.476 CC test/accel/dif/dif.o 00:05:55.476 LINK nvme_dp 00:05:55.734 CC test/nvme/err_injection/err_injection.o 00:05:55.734 LINK bdevperf 00:05:55.734 LINK overhead 00:05:55.734 CC test/nvme/startup/startup.o 00:05:55.734 CC test/lvol/esnap/esnap.o 00:05:55.734 CXX test/cpp_headers/fd_group.o 00:05:55.734 CC test/nvme/reserve/reserve.o 00:05:55.734 LINK mkfs 00:05:55.993 LINK err_injection 00:05:55.993 LINK startup 00:05:55.993 CC test/nvme/simple_copy/simple_copy.o 00:05:55.993 CC test/nvme/connect_stress/connect_stress.o 00:05:55.993 CXX test/cpp_headers/fd.o 00:05:55.993 CXX test/cpp_headers/file.o 00:05:55.993 CXX test/cpp_headers/ftl.o 00:05:55.993 LINK reserve 00:05:56.250 LINK simple_copy 00:05:56.250 CXX test/cpp_headers/gpt_spec.o 00:05:56.250 CC test/nvme/boot_partition/boot_partition.o 00:05:56.250 CC examples/nvmf/nvmf/nvmf.o 00:05:56.250 LINK dif 00:05:56.250 LINK connect_stress 00:05:56.250 CXX test/cpp_headers/hexlify.o 00:05:56.250 CXX test/cpp_headers/histogram_data.o 00:05:56.250 CXX test/cpp_headers/idxd.o 00:05:56.525 LINK boot_partition 00:05:56.525 CXX test/cpp_headers/idxd_spec.o 00:05:56.525 CC test/nvme/compliance/nvme_compliance.o 00:05:56.525 CC test/nvme/fused_ordering/fused_ordering.o 00:05:56.525 CXX test/cpp_headers/init.o 00:05:56.525 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:56.525 CC test/nvme/fdp/fdp.o 00:05:56.525 LINK nvmf 00:05:56.525 CC test/nvme/cuse/cuse.o 00:05:56.525 CXX test/cpp_headers/ioat.o 00:05:56.793 CXX test/cpp_headers/ioat_spec.o 00:05:56.793 LINK fused_ordering 00:05:56.793 LINK doorbell_aers 00:05:56.793 CXX test/cpp_headers/iscsi_spec.o 00:05:56.793 CC test/bdev/bdevio/bdevio.o 00:05:56.793 CXX test/cpp_headers/json.o 00:05:56.793 CXX test/cpp_headers/jsonrpc.o 00:05:56.793 CXX test/cpp_headers/keyring.o 00:05:57.049 LINK nvme_compliance 00:05:57.049 LINK fdp 00:05:57.049 CXX test/cpp_headers/keyring_module.o 00:05:57.049 CXX test/cpp_headers/likely.o 00:05:57.049 CXX test/cpp_headers/log.o 00:05:57.049 CXX test/cpp_headers/lvol.o 00:05:57.049 CXX test/cpp_headers/memory.o 00:05:57.049 CXX test/cpp_headers/mmio.o 00:05:57.049 CXX test/cpp_headers/nbd.o 00:05:57.049 CXX test/cpp_headers/notify.o 00:05:57.049 CXX test/cpp_headers/nvme.o 00:05:57.049 CXX test/cpp_headers/nvme_intel.o 00:05:57.306 CXX test/cpp_headers/nvme_ocssd.o 00:05:57.306 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:57.306 CXX test/cpp_headers/nvme_spec.o 00:05:57.306 CXX test/cpp_headers/nvme_zns.o 00:05:57.306 LINK bdevio 00:05:57.306 CXX test/cpp_headers/nvmf_cmd.o 00:05:57.306 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:57.306 CXX test/cpp_headers/nvmf.o 00:05:57.306 CXX test/cpp_headers/nvmf_spec.o 00:05:57.306 CXX 
test/cpp_headers/nvmf_transport.o 00:05:57.564 CXX test/cpp_headers/opal.o 00:05:57.564 CXX test/cpp_headers/opal_spec.o 00:05:57.564 CXX test/cpp_headers/pci_ids.o 00:05:57.564 CXX test/cpp_headers/pipe.o 00:05:57.564 CXX test/cpp_headers/queue.o 00:05:57.564 CXX test/cpp_headers/reduce.o 00:05:57.564 CXX test/cpp_headers/rpc.o 00:05:57.564 CXX test/cpp_headers/scheduler.o 00:05:57.564 CXX test/cpp_headers/scsi.o 00:05:57.822 CXX test/cpp_headers/scsi_spec.o 00:05:57.822 CXX test/cpp_headers/sock.o 00:05:57.822 CXX test/cpp_headers/stdinc.o 00:05:57.822 CXX test/cpp_headers/string.o 00:05:57.822 CXX test/cpp_headers/thread.o 00:05:57.822 CXX test/cpp_headers/trace.o 00:05:57.822 CXX test/cpp_headers/trace_parser.o 00:05:57.822 CXX test/cpp_headers/tree.o 00:05:57.822 CXX test/cpp_headers/ublk.o 00:05:58.079 CXX test/cpp_headers/util.o 00:05:58.079 CXX test/cpp_headers/uuid.o 00:05:58.079 CXX test/cpp_headers/version.o 00:05:58.079 CXX test/cpp_headers/vfio_user_pci.o 00:05:58.079 CXX test/cpp_headers/vfio_user_spec.o 00:05:58.079 CXX test/cpp_headers/vhost.o 00:05:58.079 CXX test/cpp_headers/vmd.o 00:05:58.079 CXX test/cpp_headers/xor.o 00:05:58.079 LINK cuse 00:05:58.080 CXX test/cpp_headers/zipf.o 00:06:03.346 LINK esnap 00:06:03.346 00:06:03.346 real 2m0.622s 00:06:03.346 user 11m55.244s 00:06:03.346 sys 2m12.564s 00:06:03.346 13:47:27 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:06:03.346 13:47:27 make -- common/autotest_common.sh@10 -- $ set +x 00:06:03.346 ************************************ 00:06:03.346 END TEST make 00:06:03.346 ************************************ 00:06:03.346 13:47:27 -- common/autotest_common.sh@1142 -- $ return 0 00:06:03.346 13:47:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:03.346 13:47:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:03.346 13:47:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:03.346 13:47:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:03.346 13:47:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:03.346 13:47:27 -- pm/common@44 -- $ pid=5237 00:06:03.346 13:47:27 -- pm/common@50 -- $ kill -TERM 5237 00:06:03.346 13:47:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:03.346 13:47:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:03.346 13:47:27 -- pm/common@44 -- $ pid=5239 00:06:03.346 13:47:27 -- pm/common@50 -- $ kill -TERM 5239 00:06:03.606 13:47:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.606 13:47:27 -- nvmf/common.sh@7 -- # uname -s 00:06:03.606 13:47:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.606 13:47:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.606 13:47:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.606 13:47:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.606 13:47:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.606 13:47:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.606 13:47:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.606 13:47:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.606 13:47:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.606 13:47:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.606 13:47:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb411604-4b2f-465f-8445-56ea1ec33608 00:06:03.606 
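The two kill -TERM calls above come from stop_monitor_resources: each background resource monitor records its pid in a *.pid file under the power/ output directory, and teardown simply TERMs whatever those files name (pids 5237 and 5239 here). Stripped of the pm/common plumbing, the pattern is roughly (output path assumed from the log):

    # TERM every resource monitor that left a pidfile behind.
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    for pidfile in "$power_dir"/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        kill -TERM "$(<"$pidfile")" 2>/dev/null || true
    done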
13:47:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb411604-4b2f-465f-8445-56ea1ec33608 00:06:03.606 13:47:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.606 13:47:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.606 13:47:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.606 13:47:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.606 13:47:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.606 13:47:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.606 13:47:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.606 13:47:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.606 13:47:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.606 13:47:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.606 13:47:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.606 13:47:27 -- paths/export.sh@5 -- # export PATH 00:06:03.606 13:47:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.606 13:47:27 -- nvmf/common.sh@47 -- # : 0 00:06:03.606 13:47:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:03.606 13:47:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:03.606 13:47:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.606 13:47:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.606 13:47:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.606 13:47:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:03.606 13:47:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:03.606 13:47:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:03.606 13:47:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:03.606 13:47:27 -- spdk/autotest.sh@32 -- # uname -s 00:06:03.606 13:47:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:03.606 13:47:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:03.606 13:47:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:03.606 13:47:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:03.606 13:47:27 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:03.606 13:47:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:03.606 13:47:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:03.606 13:47:28 -- 
spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:03.606 13:47:28 -- spdk/autotest.sh@48 -- # udevadm_pid=54173 00:06:03.606 13:47:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:03.606 13:47:28 -- pm/common@17 -- # local monitor 00:06:03.606 13:47:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:03.606 13:47:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:03.606 13:47:28 -- pm/common@25 -- # sleep 1 00:06:03.606 13:47:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:03.606 13:47:28 -- pm/common@21 -- # date +%s 00:06:03.606 13:47:28 -- pm/common@21 -- # date +%s 00:06:03.606 13:47:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721051248 00:06:03.606 13:47:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721051248 00:06:03.606 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721051248_collect-vmstat.pm.log 00:06:03.606 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721051248_collect-cpu-load.pm.log 00:06:04.543 13:47:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:04.543 13:47:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:04.543 13:47:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:04.543 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:06:04.543 13:47:29 -- spdk/autotest.sh@59 -- # create_test_list 00:06:04.543 13:47:29 -- common/autotest_common.sh@746 -- # xtrace_disable 00:06:04.543 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:06:04.543 13:47:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:04.543 13:47:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:04.543 13:47:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:04.543 13:47:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:04.543 13:47:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:04.544 13:47:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:04.544 13:47:29 -- common/autotest_common.sh@1455 -- # uname 00:06:04.544 13:47:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:04.544 13:47:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:04.544 13:47:29 -- common/autotest_common.sh@1475 -- # uname 00:06:04.544 13:47:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:04.544 13:47:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:06:04.544 13:47:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:06:04.544 13:47:29 -- spdk/autotest.sh@72 -- # hash lcov 00:06:04.544 13:47:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:04.544 13:47:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:06:04.544 --rc lcov_branch_coverage=1 00:06:04.544 --rc lcov_function_coverage=1 00:06:04.544 --rc genhtml_branch_coverage=1 00:06:04.544 --rc genhtml_function_coverage=1 00:06:04.544 --rc genhtml_legend=1 00:06:04.544 --rc geninfo_all_blocks=1 00:06:04.544 ' 00:06:04.544 13:47:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:06:04.544 --rc lcov_branch_coverage=1 00:06:04.544 --rc lcov_function_coverage=1 00:06:04.544 --rc genhtml_branch_coverage=1 00:06:04.544 --rc 
genhtml_function_coverage=1 00:06:04.544 --rc genhtml_legend=1 00:06:04.544 --rc geninfo_all_blocks=1 00:06:04.544 ' 00:06:04.544 13:47:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:06:04.544 --rc lcov_branch_coverage=1 00:06:04.544 --rc lcov_function_coverage=1 00:06:04.544 --rc genhtml_branch_coverage=1 00:06:04.544 --rc genhtml_function_coverage=1 00:06:04.544 --rc genhtml_legend=1 00:06:04.544 --rc geninfo_all_blocks=1 00:06:04.544 --no-external' 00:06:04.544 13:47:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:06:04.544 --rc lcov_branch_coverage=1 00:06:04.544 --rc lcov_function_coverage=1 00:06:04.544 --rc genhtml_branch_coverage=1 00:06:04.544 --rc genhtml_function_coverage=1 00:06:04.544 --rc genhtml_legend=1 00:06:04.544 --rc geninfo_all_blocks=1 00:06:04.544 --no-external' 00:06:04.544 13:47:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:06:04.803 lcov: LCOV version 1.14 00:06:04.803 13:47:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:22.913 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:22.913 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 
00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 
00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:35.117 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:35.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 
00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 
00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:35.118 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:35.118 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:39.374 13:48:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:06:39.374 13:48:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:39.374 13:48:03 -- common/autotest_common.sh@10 -- # set +x 00:06:39.374 13:48:03 -- spdk/autotest.sh@91 -- # rm -f 00:06:39.374 13:48:03 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:39.374 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:39.941 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:39.941 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:39.941 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:39.941 0000:00:13.0 (1b36 0010): Already using the nvme driver 
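The flood of geninfo "no functions found" warnings above is expected rather than a failure: this lcov pass uses -c -i to capture an initial, zero-coverage baseline, and the test/cpp_headers objects are single-include translation units that define no functions, so their .gcno files give gcov nothing to report. The baseline exists so that files never executed by any test still appear at 0% when tracefiles are merged later; the usual three-step flow looks roughly like this (tracefile names assumed):

    # 1. zero-coverage baseline before any test runs (-i = initial capture)
    lcov $LCOV_OPTS -q -c -i -t Baseline -d "$src" -o cov_base.info
    # 2. real counters after the suite has executed
    lcov $LCOV_OPTS -q -c -t Tests -d "$src" -o cov_test.info
    # 3. merge, so untouched files still show up at 0%
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info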
00:06:39.941 13:48:04 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:06:39.941 13:48:04 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:39.941 13:48:04 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:39.941 13:48:04 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:39.941 13:48:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:39.941 13:48:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:39.941 13:48:04 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:39.941 13:48:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:39.941 13:48:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:39.941 13:48:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:39.941 13:48:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:39.941 13:48:04 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:39.941 13:48:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:39.941 13:48:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:39.941 13:48:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:39.941 13:48:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:06:39.941 13:48:04 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:06:39.941 13:48:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:39.941 13:48:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:39.941 13:48:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:39.941 13:48:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:06:39.942 13:48:04 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:06:39.942 13:48:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:39.942 13:48:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:39.942 13:48:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:39.942 13:48:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:06:39.942 13:48:04 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:06:39.942 13:48:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:39.942 13:48:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:39.942 13:48:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:39.942 13:48:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:06:39.942 13:48:04 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:06:39.942 13:48:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:39.942 13:48:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:39.942 13:48:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:39.942 13:48:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:06:39.942 13:48:04 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:06:39.942 13:48:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:39.942 13:48:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:39.942 13:48:04 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:06:39.942 13:48:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.942 13:48:04 -- spdk/autotest.sh@112 -- 
# [[ -z '' ]] 00:06:39.942 13:48:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:06:39.942 13:48:04 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:06:39.942 13:48:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:39.942 No valid GPT data, bailing 00:06:39.942 13:48:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:39.942 13:48:04 -- scripts/common.sh@391 -- # pt= 00:06:39.942 13:48:04 -- scripts/common.sh@392 -- # return 1 00:06:39.942 13:48:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:40.212 1+0 records in 00:06:40.212 1+0 records out 00:06:40.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118377 s, 88.6 MB/s 00:06:40.212 13:48:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:40.212 13:48:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:40.212 13:48:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:06:40.212 13:48:04 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:06:40.212 13:48:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:40.212 No valid GPT data, bailing 00:06:40.212 13:48:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:40.212 13:48:04 -- scripts/common.sh@391 -- # pt= 00:06:40.212 13:48:04 -- scripts/common.sh@392 -- # return 1 00:06:40.212 13:48:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:40.212 1+0 records in 00:06:40.212 1+0 records out 00:06:40.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041095 s, 255 MB/s 00:06:40.212 13:48:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:40.212 13:48:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:40.212 13:48:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:06:40.212 13:48:04 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:06:40.212 13:48:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:40.212 No valid GPT data, bailing 00:06:40.212 13:48:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:40.212 13:48:04 -- scripts/common.sh@391 -- # pt= 00:06:40.212 13:48:04 -- scripts/common.sh@392 -- # return 1 00:06:40.212 13:48:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:40.212 1+0 records in 00:06:40.212 1+0 records out 00:06:40.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044908 s, 233 MB/s 00:06:40.212 13:48:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:40.212 13:48:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:40.212 13:48:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:06:40.212 13:48:04 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:06:40.212 13:48:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:06:40.212 No valid GPT data, bailing 00:06:40.212 13:48:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:40.212 13:48:04 -- scripts/common.sh@391 -- # pt= 00:06:40.212 13:48:04 -- scripts/common.sh@392 -- # return 1 00:06:40.212 13:48:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:06:40.212 1+0 records in 00:06:40.212 1+0 records out 00:06:40.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423121 s, 248 MB/s 00:06:40.212 13:48:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:40.212 13:48:04 -- 
spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:40.212 13:48:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:06:40.212 13:48:04 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:06:40.212 13:48:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:06:40.483 No valid GPT data, bailing 00:06:40.483 13:48:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:40.483 13:48:04 -- scripts/common.sh@391 -- # pt= 00:06:40.483 13:48:04 -- scripts/common.sh@392 -- # return 1 00:06:40.483 13:48:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:06:40.483 1+0 records in 00:06:40.483 1+0 records out 00:06:40.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044636 s, 235 MB/s 00:06:40.483 13:48:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:40.483 13:48:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:40.483 13:48:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:06:40.483 13:48:04 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:06:40.483 13:48:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:40.483 No valid GPT data, bailing 00:06:40.483 13:48:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:40.483 13:48:04 -- scripts/common.sh@391 -- # pt= 00:06:40.483 13:48:04 -- scripts/common.sh@392 -- # return 1 00:06:40.483 13:48:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:40.483 1+0 records in 00:06:40.483 1+0 records out 00:06:40.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458402 s, 229 MB/s 00:06:40.483 13:48:04 -- spdk/autotest.sh@118 -- # sync 00:06:40.483 13:48:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:40.483 13:48:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:40.483 13:48:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:42.385 13:48:06 -- spdk/autotest.sh@124 -- # uname -s 00:06:42.385 13:48:06 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:06:42.385 13:48:06 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:42.385 13:48:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.385 13:48:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.385 13:48:06 -- common/autotest_common.sh@10 -- # set +x 00:06:42.385 ************************************ 00:06:42.385 START TEST setup.sh 00:06:42.385 ************************************ 00:06:42.385 13:48:06 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:42.385 * Looking for test storage... 
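Each "No valid GPT data, bailing" sequence above is block_in_use deciding a namespace is free: spdk-gpt.py probes the device for a GPT, blkid -s PTTYPE is consulted as a fallback, and an empty partition-table type means nothing owns the device, after which autotest zeroes its first MiB so stale metadata cannot leak into the next run. Condensed to its core, the per-device logic is roughly as follows (control flow reconstructed from the trace; error handling trimmed):

    dev=/dev/nvme0n1                                # example device
    if ! scripts/spdk-gpt.py "$dev"; then
        pt=$(blkid -s PTTYPE -o value "$dev") || pt=
        [[ -z $pt ]] && echo "$dev: no partition table, treating as free"
    fi
    dd if=/dev/zero of="$dev" bs=1M count=1         # destructive: wipes the first MiB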
00:06:42.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:42.385 13:48:06 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:06:42.385 13:48:06 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:42.385 13:48:06 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:42.385 13:48:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.385 13:48:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.385 13:48:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:42.385 ************************************ 00:06:42.385 START TEST acl 00:06:42.385 ************************************ 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:42.385 * Looking for test storage... 00:06:42.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:42.385 13:48:06 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:42.385 13:48:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:06:42.385 13:48:06 
00:06:42.386 13:48:06 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:06:42.386 13:48:06 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:06:42.386 13:48:06 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:06:42.386 13:48:06 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:06:42.386 13:48:06 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:06:42.386 13:48:06 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:06:42.386 13:48:06 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:43.763 13:48:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:06:43.763 13:48:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:06:43.763 13:48:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:06:43.763 13:48:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:06:43.763 13:48:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:06:43.763 13:48:08 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:44.330 13:48:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]]
00:06:44.330 13:48:08 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:06:44.330 13:48:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:06:44.586 Hugepages
00:06:44.586 node hugesize free / total
00:06:44.586 13:48:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:06:44.586 13:48:09 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:06:44.586 13:48:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:06:44.586
00:06:44.586 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:44.586 13:48:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:06:44.586 13:48:09 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:06:44.586 13:48:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:06:44.587 13:48:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:06:44.587 13:48:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:06:44.587 13:48:09 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:06:44.587 13:48:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
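Note: the collect_setup_devs loop running here parses the human-readable `setup.sh status` table line by line: it discards every row whose second column is not a BDF, then keeps rows whose driver column says nvme. A condensed sketch of that parse, assuming the column layout matches the status header shown above:

#!/usr/bin/env bash
# Sketch of the status-table parse traced around this point.
declare -a devs=()
declare -A drivers=()
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue  # header and hugepage rows fall out here
    [[ $driver == nvme ]] || continue  # e.g. virtio-pci at 0000:00:03.0 is skipped
    devs+=("$dev")
    drivers[$dev]=$driver
done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)
printf '%s -> nvme\n' "${devs[@]}"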
00:06:44.844 13:48:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]]
00:06:44.844 13:48:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:06:44.844 13:48:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]]
00:06:44.844 13:48:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:06:44.844 13:48:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:06:44.844 13:48:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[... identical accept-and-record trace repeated for 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0 ...]
00:06:45.102 13:48:09 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 ))
00:06:45.102 13:48:09 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:06:45.102 13:48:09 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:45.102 13:48:09 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:45.102 13:48:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:06:45.102 ************************************
00:06:45.102 START TEST denied ************************************
00:06:45.102 13:48:09 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:06:45.102 13:48:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0'
00:06:45.102 13:48:09 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0'
00:06:45.102 13:48:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:06:45.102 13:48:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:06:45.102 13:48:09 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:06:46.202 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0
00:06:46.202 13:48:10 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0
00:06:46.202 13:48:10 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:06:46.202 13:48:10 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:06:46.202 13:48:10 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]]
00:06:46.202 13:48:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver
00:06:46.202 13:48:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:06:46.202 13:48:10 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:06:46.202 13:48:10 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:06:46.202 13:48:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:06:46.202 13:48:10 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:52.763
00:06:52.763 real 0m7.156s
00:06:52.763 user 0m0.845s
00:06:52.763 sys 0m1.327s
00:06:52.763 13:48:16 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:52.763 13:48:16 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:06:52.763 ************************************
00:06:52.763 END TEST denied
00:06:52.763 ************************************
00:06:52.763 13:48:16 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:06:52.763 13:48:16 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:06:52.763 13:48:16 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:52.763 13:48:16 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:52.763 13:48:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:06:52.763 ************************************
00:06:52.763 START TEST allowed ************************************
00:06:52.763 13:48:16 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:06:52.763 13:48:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0
00:06:52.763 13:48:16 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*'
00:06:52.763 13:48:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:06:52.763 13:48:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:06:52.763 13:48:16 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:06:53.332 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]]
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]]
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
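Note: both ACL cases, denied above and allowed here, reduce to the same check visible in the readlink calls: resolve /sys/bus/pci/devices/<bdf>/driver and compare the leaf name against what the policy expects. A sketch of that check; driver_of is a made-up name:

#!/usr/bin/env bash
# Sketch of the per-device driver check that verify performs in this trace.
driver_of() {
    local bdf=$1
    # The driver symlink exists only while some driver is bound.
    [[ -e /sys/bus/pci/devices/$bdf/driver ]] || { echo unbound; return; }
    basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")"
}

# With PCI_ALLOWED=0000:00:10.0 only that controller was rebound to
# uio_pci_generic; the other three are expected to stay on nvme.
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    echo "$bdf -> $(driver_of "$bdf")"
done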
"$@" 00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:53.332 13:48:17 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:54.709 00:06:54.709 real 0m2.251s 00:06:54.709 user 0m1.059s 00:06:54.709 sys 0m1.192s 00:06:54.709 13:48:18 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.709 ************************************ 00:06:54.709 END TEST allowed 00:06:54.709 ************************************ 00:06:54.709 13:48:18 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:06:54.709 13:48:18 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:06:54.709 00:06:54.709 real 0m12.083s 00:06:54.709 user 0m3.167s 00:06:54.709 sys 0m3.933s 00:06:54.709 ************************************ 00:06:54.709 END TEST acl 00:06:54.709 ************************************ 00:06:54.709 13:48:18 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.709 13:48:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:54.709 13:48:18 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:54.709 13:48:18 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:54.709 13:48:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.709 13:48:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.709 13:48:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:54.710 ************************************ 00:06:54.710 START TEST hugepages 00:06:54.710 ************************************ 00:06:54.710 13:48:18 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:54.710 * Looking for test storage... 
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5797740 kB' 'MemAvailable: 7396400 kB' 'Buffers: 2436 kB' 'Cached: 1812104 kB' 'SwapCached: 0 kB' 'Active: 445024 kB' 'Inactive: 1472044 kB' 'Active(anon): 113040 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 103968 kB' 'Mapped: 48576 kB' 'Shmem: 10512 kB' 'KReclaimable: 63592 kB' 'Slab: 136328 kB' 'SReclaimable: 63592 kB' 'SUnreclaim: 72736 kB' 'KernelStack: 6348 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 326732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:54.710 13:48:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... identical compare-and-continue trace repeated for every following /proc/meminfo field (MemFree through HugePages_Surp) until Hugepagesize matches ...]
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:06:54.711 13:48:19 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:06:54.711 13:48:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:54.711 13:48:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:54.711 13:48:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:54.711 ************************************
00:06:54.711 START TEST default_setup ************************************
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
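Note: get_test_nr_hugepages turns the requested size into a page count; the nr_hugepages value in the next few trace lines follows from 2 MiB pages, assuming (as the trace implies) that both quantities are expressed in kB:

# Sketch of the arithmetic behind nr_hugepages=1024 below.
default_hugepages=2048              # kB, the Hugepagesize just read
size=2097152                        # kB requested, i.e. 2 GiB
nr_hugepages=$(( size / default_hugepages ))
echo "$nr_hugepages"                # -> 1024 pages, all assigned to node 0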
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:06:54.711 13:48:19 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:55.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:55.843 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:55.843 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:06:55.843 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:55.843 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
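Note: verify_nr_hugepages now calls get_meminfo twice more, for AnonHugePages and then HugePages_Surp. The helper's shape, reconstructed from this trace (simplified: the real code uses mapfile plus an extglob strip of the "Node N " prefix; sed/awk stand in for that here):

#!/usr/bin/env bash
# Reconstruction sketch of get_meminfo as traced here.
get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    # With a node argument the per-node sysfs file is used instead; its
    # lines carry a "Node N " prefix that must be stripped before the
    # field name lines up as the first column.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    sed 's/^Node [0-9]* *//' "$mem_f" |
        awk -v k="$get" '$1 == k ":" { print $2; exit }'
}

get_meminfo AnonHugePages      # -> 0, matching the echo below
get_meminfo HugePages_Free 0   # per-node variant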
13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908580 kB' 'MemAvailable: 9507008 kB' 'Buffers: 2436 kB' 'Cached: 1812092 kB' 'SwapCached: 0 kB' 'Active: 462500 kB' 'Inactive: 1472068 kB' 'Active(anon): 130516 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121608 kB' 'Mapped: 48792 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135380 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72300 kB' 'KernelStack: 6320 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.106 13:48:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.106 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _
00:06:56.107 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # ... (loop repeats for the remaining meminfo fields, Mapped through HardwareCorrupted; none matches AnonHugePages, so each iteration hits continue)
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
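The trace above is the inner loop of get_meminfo: the function prints every "field: value" pair from the chosen meminfo source, reads them back one at a time with IFS=': ', and skips (continue) every field until the requested one matches, at which point it echoes the value and returns. Below is a minimal, self-contained sketch of that lookup pattern; it reads /proc/meminfo directly, and the function name get_meminfo_value is illustrative, not SPDK's actual setup/common.sh helper.

#!/usr/bin/env bash
# Look up one field from /proc/meminfo, echoing 0 if it is absent --
# the same scan-and-continue pattern visible in the xtrace above.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do   # splits "Field:   123 kB"
        [[ $var == "$get" ]] || continue   # every other field hits continue
        echo "${val:-0}"                   # matched: print the number
        return 0
    done < /proc/meminfo
    echo 0                                 # field not present at all
}
get_meminfo_value AnonHugePages            # e.g. prints 0, as in this log

Scanning the whole field list on every call is why the log replays the full set of comparisons for each variable it collects (anon, then surp, then resv, and so on).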
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908580 kB' 'MemAvailable: 9507008 kB' 'Buffers: 2436 kB' 'Cached: 1812092 kB' 'SwapCached: 0 kB' 'Active: 462252 kB' 'Inactive: 1472068 kB' 'Active(anon): 130268 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121400 kB' 'Mapped: 48664 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135380 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72300 kB' 'KernelStack: 6272 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:56.108 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # ... (loop repeats for MemTotal through HugePages_Rsvd; none matches HugePages_Surp, so each iteration hits continue)
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
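In the prologue above (common.sh@17 through @29), get_meminfo decides where to read from: with no node argument ([[ -n '' ]] is false) it stays on /proc/meminfo, otherwise it would switch to /sys/devices/system/node/node<N>/meminfo and strip the leading "Node <N> " prefix with the extglob pattern +([0-9]) so both sources parse the same way. The following is a sketch of that selection logic under the assumption that this is all the prologue does; read_meminfo_lines is an illustrative name, not the real helper.

#!/usr/bin/env bash
shopt -s extglob                       # needed for the +([0-9]) pattern below
read_meminfo_lines() {
    local node=${1-} mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"          # one array element per line
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines read "Node 0 MemTotal: ..."
    printf '%s\n' "${mem[@]}"
}
read_meminfo_lines      # system-wide view
read_meminfo_lines 0    # NUMA node 0, if /sys exposes it

Normalizing both sources to the same "Field: value" shape is what lets the single scan loop above serve node-local and system-wide queries alike.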
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908580 kB' 'MemAvailable: 9507008 kB' 'Buffers: 2436 kB' 'Cached: 1812092 kB' 'SwapCached: 0 kB' 'Active: 462480 kB' 'Inactive: 1472068 kB' 'Active(anon): 130496 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121620 kB' 'Mapped: 48664 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135372 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72292 kB' 'KernelStack: 6272 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:56.110 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # ... (loop repeats for MemTotal through HugePages_Free; none matches HugePages_Rsvd, so each iteration hits continue)
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
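At this point the script has anon=0, surp=0 and resv=0, and the two arithmetic guards traced just below ((( 1024 == nr_hugepages + surp + resv )) at hugepages.sh@107 and (( 1024 == nr_hugepages )) at @109) check the pool accounting against the kernel's numbers. A worked version with the values from this log, assuming (as the shape of the expression suggests) that the literal 1024 is the HugePages_Total just read back:

#!/usr/bin/env bash
# Values reported in the meminfo snapshots above.
hp_total=1024        # HugePages_Total as read back from /proc/meminfo
nr_hugepages=1024    # pages the setup requested
surp=0               # HugePages_Surp: surplus pages beyond the static pool
resv=0               # HugePages_Rsvd: reserved but not yet faulted in
(( hp_total == nr_hugepages + surp + resv )) && echo "pool accounting consistent"
(( hp_total == nr_hugepages )) && echo "kernel allocated every requested page"

Both checks pass here (1024 == 1024 + 0 + 0), which is why the trace continues without an error path.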
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:56.112 nr_hugepages=1024
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:56.112 resv_hugepages=0
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:56.112 surplus_hugepages=0
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:56.112 anon_hugepages=0
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908580 kB' 'MemAvailable: 9507008 kB' 'Buffers: 2436 kB' 'Cached: 1812092 kB' 'SwapCached: 0 kB' 'Active: 462296 kB' 'Inactive: 1472068 kB' 'Active(anon): 130312 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121432 kB' 'Mapped: 48664 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135372 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72292 kB' 'KernelStack: 6256 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:56.112 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # ... (loop repeats from MemTotal; the fields traced so far, through VmallocUsed, all miss HugePages_Total and hit continue)
setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:56.113 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908580 kB' 'MemUsed: 4333392 kB' 'SwapCached: 0 kB' 'Active: 462560 kB' 'Inactive: 1472068 kB' 'Active(anon): 130576 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1814528 kB' 'Mapped: 48664 kB' 'AnonPages: 121672 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63080 kB' 'Slab: 135372 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.114 13:48:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.114 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:56.115 node0=1024 expecting 1024 00:06:56.115 ************************************ 00:06:56.115 END TEST default_setup 00:06:56.115 ************************************ 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:56.115 00:06:56.115 real 0m1.483s 00:06:56.115 user 0m0.667s 00:06:56.115 sys 0m0.750s 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.115 13:48:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:06:56.115 13:48:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:56.115 13:48:20 setup.sh.hugepages -- 
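The wall of [[ ... ]] / continue pairs condensed above is setup/common.sh's get_meminfo helper doing a linear scan of "key: value" rows. A minimal sketch of that lookup, reconstructed from the traced commands rather than copied from the SPDK source (the extglob shell option needed for the Node-prefix strip is assumed to be enabled in the real script):

  #!/usr/bin/env bash
  # Sketch of the traced lookup; a reconstruction, not the verbatim setup/common.sh.
  shopt -s extglob   # required by the +([0-9]) pattern below (assumed on in SPDK)

  get_meminfo() {
      local get=$1 node=$2 var val _ line
      local mem_f=/proc/meminfo mem
      # Per-node queries (e.g. get_meminfo HugePages_Surp 0) read the node's own
      # meminfo, whose rows carry a "Node 0 " prefix that must be stripped.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # no-op for plain /proc/meminfo
      for line in "${mem[@]}"; do
          # IFS=': ' splits "HugePages_Total:    1024" into key and value; each
          # non-matching key is one [[ ... ]] / continue pair in the trace above.
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Total    # prints 1024 on the machine traced above
  get_meminfo HugePages_Surp 0   # prints 0 (node0 surplus), as echoed in the trace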
00:06:56.115 13:48:20 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:06:56.115 13:48:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:56.115 13:48:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:56.115 13:48:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:56.375 ************************************
00:06:56.375 START TEST per_node_1G_alloc
00:06:56.375 ************************************
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:56.375 13:48:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:56.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:56.899 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:56.899 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:56.899 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:56.899 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:56.899 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8957748 kB' 'MemAvailable: 10556180 kB' 'Buffers: 2436 kB' 'Cached: 1812092 kB' 'SwapCached: 0 kB' 'Active: 462668 kB' 'Inactive: 1472072 kB' 'Active(anon): 130684 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121520 kB' 'Mapped: 48776 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135396 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72316 kB' 'KernelStack: 6304 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed 00:06:56.899-.900: every /proc/meminfo key from MemTotal through HardwareCorrupted is compared against AnonHugePages; each non-match runs IFS=': ', read -r var val _, continue]
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
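The anon=0 just recorded comes from the THP gate traced at hugepages.sh@96: AnonHugePages is consulted only when /sys/kernel/mm/transparent_hugepage/enabled is not pinned to [never], which is what the escaped *\[\n\e\v\e\r\]* pattern tests against the "always [madvise] never" string. A condensed sketch of the surrounding bookkeeping, reconstructed from the traced hugepages.sh lines and reusing the get_meminfo sketch above (verify_pool is a hypothetical name for what the traced verify_nr_hugepages is doing, not the SPDK function itself):

  # Reconstruction of the traced pool check; not the verbatim hugepages.sh.
  verify_pool() {
      local expected=$1 node=${2:-0} anon=0 surp resv total
      # THP gate seen at hugepages.sh@96: only read AnonHugePages when the
      # kernel's enabled mode is not "[never]".
      [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]] \
          && anon=$(get_meminfo AnonHugePages)   # traced result: 0
      # anon feeds accounting later in the script; its use falls outside
      # this slice of the trace, so it is only captured here.
      surp=$(get_meminfo HugePages_Surp "$node")  # traced: echo 0
      resv=$(get_meminfo HugePages_Rsvd)          # 'HugePages_Rsvd: 0' in the dump
      total=$(get_meminfo HugePages_Total)        # 512 for this test
      # Same arithmetic as hugepages.sh@110: surplus and reserved pages are
      # folded in before comparing against the requested pool size.
      (( total == expected + surp + resv )) \
          && echo "node$node=$total expecting $expected"
  }

  verify_pool 512 0   # per_node_1G_alloc; the earlier default_setup pass used 1024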
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:56.900 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8957496 kB' 'MemAvailable: 10555928 kB' 'Buffers: 2436 kB' 'Cached: 1812092 kB' 'SwapCached: 0 kB' 'Active: 462404 kB' 'Inactive: 1472072 kB' 'Active(anon): 130420 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121572 kB' 'Mapped: 48664 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135392 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72312 kB' 'KernelStack: 6272 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:56.901 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:56.901 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace elided: every field from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with continue]
00:06:56.902 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:56.902 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
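A note on the escaped right-hand sides in these tests: xtrace prints [[ $var == "HugePages_Surp" ]] with every character backslash-escaped because the script quotes the pattern, which forces a literal string comparison rather than a glob. The distinction, in two illustrative lines (not from the suite):

    [[ $var == "HugePages_Surp" ]]   # quoted/escaped: matches only the exact field name
    [[ $var == HugePages_* ]]        # unquoted: a glob, would also match HugePages_Total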
00:06:56.902 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:56.902 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:56.902 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:56.902 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # [get_meminfo prologue elided: identical to the HugePages_Surp call above, with get=HugePages_Rsvd]
00:06:56.902 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8957496 kB' 'MemAvailable: 10555928 kB' 'Buffers: 2436 kB' 'Cached: 1812092 kB' 'SwapCached: 0 kB' 'Active: 462436 kB' 'Inactive: 1472072 kB' 'Active(anon): 130452 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121572 kB' 'Mapped: 48664 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135392 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72312 kB' 'KernelStack: 6272 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:56.902 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:56.902 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace elided: every field from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with continue]
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:56.904 nr_hugepages=512
resv_hugepages=0
13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
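With anon, surp and resv collected, the harness prints its summary (below) and asserts the hugepage accounting. Restated with this run's values — a worked check, not additional test logic:

    nr_hugepages=512 surp=0 resv=0 anon=0
    (( 512 == nr_hugepages + surp + resv ))   # hugepages.sh@107: 512 == 512+0+0, holds
    (( 512 == nr_hugepages ))                 # hugepages.sh@109: holds, so per-node totals are verified next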
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # [get_meminfo prologue elided: identical to the calls above, with get=HugePages_Total]
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8957496 kB' 'MemAvailable: 10555928 kB' 'Buffers: 2436 kB' 'Cached: 1812092 kB' 'SwapCached: 0 kB' 'Active: 462340 kB' 'Inactive: 1472072 kB' 'Active(anon): 130356 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121460 kB' 'Mapped: 48664 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135388 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72308 kB' 'KernelStack: 6256 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:56.904 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:56.905 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace elided: fields from MemTotal through ShmemHugePages are tested against HugePages_Total and skipped with continue; the scan continues]
IFS=': ' 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:56.906 13:48:21 
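The scan that just returned 512 is setup/common.sh's get_meminfo walking a meminfo file one field at a time. A minimal bash sketch of that parsing pattern, reconstructed from the xtrace above rather than copied from the SPDK source (treat the function body and its exact line placement as assumptions):

  #!/usr/bin/env bash
  shopt -s extglob   # for the +([0-9]) pattern below
  # get_meminfo FIELD [NODE] -> prints the value of FIELD, e.g. 512
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _ mem_f=/proc/meminfo
      # Per-node lookups read the node's own meminfo, as the trace does for node 0.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node files prefix each line with "Node <N> "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")
      local line
      for line in "${mem[@]}"; do
          # Split "HugePages_Total:   512" into var=HugePages_Total, val=512.
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

Invoked as get_meminfo HugePages_Total or get_meminfo HugePages_Surp 0, matching the calls visible in the trace; one [[ ... ]] test and one continue per field is exactly why the raw xtrace is so long.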
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:56.906 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8957496 kB' 'MemUsed: 3284476 kB' 'SwapCached: 0 kB' 'Active: 462344 kB' 'Inactive: 1472072 kB' 'Active(anon): 130360 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 1814528 kB' 'Mapped: 48664 kB' 'AnonPages: 121456 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63080 kB' 'Slab: 135388 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace compacted: setup/common.sh@31-32 reads and skips each node0 meminfo field from MemTotal through HugePages_Free while scanning for HugePages_Surp]
00:06:57.167 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:57.167 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:57.167 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:57.167 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:57.167 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:57.167 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:57.167 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:57.167 node0=512 expecting 512
00:06:57.167 ************************************
00:06:57.167 END TEST per_node_1G_alloc
00:06:57.167 ************************************
00:06:57.167 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:06:57.167 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:57.167
00:06:57.167 real 0m0.805s
00:06:57.167 user 0m0.390s
00:06:57.167 sys 0m0.418s
00:06:57.167 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:57.167 13:48:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:57.167 13:48:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
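per_node_1G_alloc passed because the per-node sums matched: get_nodes recorded 512 pages on node0, the node's surplus read back as 0, and the final comparison printed node0=512 expecting 512. A bash sketch of that per-node bookkeeping under stated assumptions (the sysfs hugepage paths and the shape of get_nodes are inferred from the trace, not copied from setup/hugepages.sh):

  #!/usr/bin/env bash
  shopt -s extglob
  declare -a nodes_sys nodes_test
  # Assumed shape of get_nodes: one entry per NUMA node, keyed by node index.
  get_nodes() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do
          # /sys/devices/system/node/node0 -> key 0; count its 2 MiB pages.
          nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      no_nodes=${#nodes_sys[@]}
  }
  get_nodes
  nodes_test[0]=512   # what the test asked the kernel to place on node 0
  for node in "${!nodes_test[@]}"; do
      # Surplus pages count toward the expected total, mirroring the
      # (( nodes_test[node] += surp )) step traced above; surplus_hugepages
      # is the sysfs equivalent of the per-node HugePages_Surp read.
      surp=$(< "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/surplus_hugepages")
      (( nodes_test[node] += surp ))
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done

On the VM above this prints node0=512 expecting 512, which is the line the test then string-compares with [[ 512 == \5\1\2 ]].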
00:06:57.167 13:48:21 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:06:57.167 13:48:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:57.167 13:48:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:57.167 13:48:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:57.167 ************************************
00:06:57.167 START TEST even_2G_alloc
00:06:57.167 ************************************
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:57.167 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:06:57.168 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:57.168 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:57.168 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:57.168 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:06:57.168 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:06:57.168 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:06:57.168 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:57.168 13:48:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:57.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:57.708 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:57.708 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:57.708 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:57.708 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
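even_2G_alloc sizes its pool with plain integer arithmetic: 2097152 kB requested, divided by the 2048 kB default hugepage size from the meminfo dumps, gives the nr_hugepages=1024 seen in the trace; NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are then set before re-running scripts/setup.sh, which evidently spreads the pool evenly across NUMA nodes (the test name suggests as much). A worked bash sketch of that arithmetic (variable names mirror the trace; the function body is an assumption, not SPDK source):

  #!/usr/bin/env bash
  # get_test_nr_hugepages SIZE_KB -> number of default-size hugepages
  get_test_nr_hugepages() {
      local size=$1                    # 2097152 kB = 2 GiB
      local default_hugepages=2048     # kB, per 'Hugepagesize: 2048 kB' above
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))
  }
  get_test_nr_hugepages 2097152
  echo "$nr_hugepages"                 # 1024, matching nr_hugepages=1024 above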
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:57.709 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908008 kB' 'MemAvailable: 9506444 kB' 'Buffers: 2436 kB' 'Cached: 1812096 kB' 'SwapCached: 0 kB' 'Active: 463284 kB' 'Inactive: 1472076 kB' 'Active(anon): 131300 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122240 kB' 'Mapped: 49080 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135360 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72280 kB' 'KernelStack: 6288 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
[xtrace compacted: setup/common.sh@31-32 reads and skips each /proc/meminfo field from MemTotal through HardwareCorrupted while scanning for AnonHugePages]
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:57.710 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907504 kB' 'MemAvailable: 9505940 kB' 'Buffers: 2436 kB' 'Cached: 1812096 kB' 'SwapCached: 0 kB' 'Active: 462260 kB' 'Inactive: 1472076 kB' 'Active(anon): 130276 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121664 kB' 'Mapped: 48664 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135436 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72356 kB' 'KernelStack: 6268 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
[xtrace compacted: setup/common.sh@31-32 reads and skips fields MemTotal through SReclaimable so far while scanning for HugePages_Surp]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.711 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907504 kB' 'MemAvailable: 9505940 kB' 'Buffers: 2436 kB' 'Cached: 1812096 kB' 'SwapCached: 0 kB' 'Active: 462496 kB' 'Inactive: 1472076 kB' 'Active(anon): 130512 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121948 kB' 'Mapped: 48924 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135436 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72356 kB' 'KernelStack: 6284 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:57.712 13:48:22 
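For reference, the common.sh@16-@33 entries above all come from one small helper: get_meminfo slurps a meminfo file into an array and linearly scans it for a single key, printing its value. A minimal sketch reconstructed from the statements this log records (inferred from the trace, not the verbatim SPDK test/setup/common.sh; the @25 branch, which evaluates [[ -n '' ]] here, is left out):

    shopt -s extglob                                  # the +([0-9]) pattern below needs extglob
    get_meminfo() {
        local get=$1                                  # @17: key to look up, e.g. HugePages_Surp
        local node=$2                                 # @18: optional NUMA node; empty in this call
        local var val
        local mem_f mem
        mem_f=/proc/meminfo                           # @22: default source
        # @23-@24: with node unset this probes .../node/node/meminfo, which does not
        # exist, so the global /proc/meminfo is kept; with node=0 it switches to the
        # per-node file (see the "get_meminfo HugePages_Surp 0" call further down).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                     # @28: one array element per meminfo line
        mem=("${mem[@]#Node +([0-9]) }")              # @29: strip "Node N " prefixes of per-node files
        while IFS=': ' read -r var val _; do          # @31-@32: split "Key:  value kB"
            [[ $var == "$get" ]] || continue          # non-matching keys produce the continue entries
            echo "$val"                               # @33: e.g. 0 for HugePages_Surp
            return 0
        done < <(printf '%s\n' "${mem[@]}")           # @16: the snapshot printed in the trace
    }

Because each call rereads the file, the full /proc/meminfo snapshot reappears once per lookup in this log.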
00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[common.sh@18-@31: same setup as the HugePages_Surp lookup above -- node unset, mem_f=/proc/meminfo, mapfile -t mem]
00:06:57.712 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' <second /proc/meminfo snapshot: identical to the first except Active: 462496 kB, Active(anon): 130512 kB, AnonPages: 121948 kB, Mapped: 48924 kB, KernelStack: 6284 kB, PageTables: 4088 kB, VmallocUsed: 54660 kB; all hugepage counters unchanged>
[common.sh@31-@32: per-key scan against HugePages_Rsvd -- every key continues until the match below]
00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1024
00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
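The two guards at hugepages.sh@107 and @109 are the assertion this even_2G_alloc stage is built around: the 1024 pages it configured must be exactly covered by nr_hugepages plus the surplus and reserved counts just read back, i.e. 1024 == 1024 + 0 + 0. A minimal standalone sketch of the same bookkeeping, assuming the variable names from the trace:

    # values gathered by the get_meminfo calls above
    nr_hugepages=1024 surp=0 resv=0 anon=0

    # @107: every requested page is accounted for (none surplus, none reserved)
    (( 1024 == nr_hugepages + surp + resv )) || exit 1

    # @109: with surp == resv == 0 the request maps 1:1 onto nr_hugepages,
    # so the test proceeds to check the per-node split (hugepages.sh@110 onward)
    (( 1024 == nr_hugepages )) || exit 1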
nr_hugepages )) 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.714 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907504 kB' 'MemAvailable: 9505948 kB' 'Buffers: 2436 kB' 'Cached: 1812104 kB' 'SwapCached: 0 kB' 'Active: 462368 kB' 'Inactive: 1472084 kB' 'Active(anon): 130384 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121816 kB' 'Mapped: 48724 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135432 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72352 kB' 'KernelStack: 6300 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:57.715 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:57.715 13:48:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / continue cycle for every remaining meminfo field, AnonPages through Unaccepted, none of which match HugePages_Total]
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
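Note: the HugePages_Surp lookup below runs per NUMA node: get_meminfo switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion on the next trace line strips off. A minimal illustration of that expansion (hypothetical snippet, not part of the test run; assumes extglob, which the script relies on):

    shopt -s extglob                        # +([0-9]) is an extended glob
    line='Node 0 MemTotal: 12241972 kB'     # shape of a node0/meminfo line
    echo "${line#Node +([0-9]) }"           # -> 'MemTotal: 12241972 kB'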
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907504 kB' 'MemUsed: 4334468 kB' 'SwapCached: 0 kB' 'Active: 462612 kB' 'Inactive: 1472084 kB' 'Active(anon): 130628 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1814540 kB' 'Mapped: 48724 kB' 'AnonPages: 121804 kB' 'Shmem: 10472 kB' 'KernelStack: 6284 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63080 kB' 'Slab: 135432 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:06:57.717 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@31-32 skips every node0 field above, MemTotal through HugePages_Free, until the requested key matches]
00:06:57.718 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:57.718 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:57.718 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:57.718 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:57.718 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:57.718 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:57.718 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:57.718 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:57.718 node0=1024 expecting 1024
00:06:57.718 13:48:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:57.718 real  0m0.703s
00:06:57.718 user  0m0.323s
00:06:57.718 sys   0m0.402s
00:06:57.718 13:48:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:57.718 13:48:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:57.718 ************************************
00:06:57.718 END TEST even_2G_alloc
00:06:57.718 ************************************
00:06:57.718 13:48:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
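Note: the long per-field walk above is SPDK's test-side meminfo reader doing a linear scan; right-hand sides such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are simply how bash xtrace renders a quoted, literal match target. A minimal sketch of the pattern as reconstructed from this trace (not copied from setup/common.sh, so names and details are approximate):

    shopt -s extglob
    get_meminfo() {
        # get_meminfo <field> [node]: print a field from /proc/meminfo, or from
        # the node's own meminfo file when a node index is supplied.
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip "Node N " prefixes, if any
        local IFS=': '
        while read -r var val _; do
            [[ $var == "$get" ]] || continue    # the runs of "continue" above
            echo "$val"                         # e.g. 1024 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

For example, get_meminfo HugePages_Surp 0 reproduces the node0 query traced above, whose result feeds the nr_hugepages + surp + resv check in hugepages.sh.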
00:06:57.718 13:48:22 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:06:57.718 13:48:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:57.718 13:48:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:57.718 13:48:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:57.977 ************************************
00:06:57.977 START TEST odd_alloc
00:06:57.977 ************************************
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:57.977 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:58.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:58.236 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:58.236 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:58.236 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:58.236 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
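Note: odd_alloc deliberately asks for a page count that cannot split evenly: the trace above shows get_test_nr_hugepages 2098176 landing on nr_hugepages=1025, i.e. 2098176 kB over the 2048 kB page size is 1024.5, which the helper evidently rounds up to the odd 1025 (HUGEMEM=2049 MB). A quick cross-check against the snapshots that follow (plain arithmetic on this log's own numbers, not test code):

    echo $((2049 * 1024))   # 2098176, the size passed to get_test_nr_hugepages
    echo $((1025 * 2048))   # 2099200, matching 'Hugetlb: 2099200 kB' below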
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:58.500 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908064 kB' 'MemAvailable: 9506504 kB' 'Buffers: 2436 kB' 'Cached: 1812100 kB' 'SwapCached: 0 kB' 'Active: 462792 kB' 'Inactive: 1472080 kB' 'Active(anon): 130808 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121884 kB' 'Mapped: 48764 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135420 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 6292 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: setup/common.sh@31-32 skips every field above, MemTotal through HardwareCorrupted, until AnonHugePages matches]
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:58.501 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:58.502 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908064 kB' 'MemAvailable: 9506504 kB' 'Buffers: 2436 kB' 'Cached: 1812100 kB' 'SwapCached: 0 kB' 'Active: 462460 kB' 'Inactive: 1472080 kB' 'Active(anon): 130476 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121624 kB' 'Mapped: 48668 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135408 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72328 kB' 'KernelStack: 6272 kB' 'PageTables: 4140
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:58.502 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@31-32 again walks the fields above in order looking for HugePages_Surp; the scan has reached ShmemPmdMapped at this point and continues below]
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:58.503 13:48:22 
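The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time until it reaches the requested field. A minimal standalone sketch of that loop (an approximation of the helper being traced, not the exact SPDK source):

    get_meminfo() {
        local get=$1 node=${2:-}   # field name, optional NUMA node index
        local mem_f=/proc/meminfo var val _
        # Per-node statistics live in sysfs when a node index is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Node files prefix every line with "Node <n> "; strip that, then split
        # each line on ": " and skip fields until the requested one matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }

Called roughly as surp=$(get_meminfo HugePages_Surp), which is how hugepages.sh@99 above arrives at surp=0.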
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:58.503 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908064 kB' 'MemAvailable: 9506504 kB' 'Buffers: 2436 kB' 'Cached: 1812100 kB' 'SwapCached: 0 kB' 'Active: 462272 kB' 'Inactive: 1472080 kB' 'Active(anon): 130288 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121380 kB' 'Mapped: 48668 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135408 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72328 kB' 'KernelStack: 6272 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
[repetitive xtrace elided: the read loop skips every field from MemTotal through HugePages_Free; none matches HugePages_Rsvd]
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
resv_hugepages=0
surplus_hugepages=0
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
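The checks at hugepages.sh@107 and @109 tie the meminfo readouts together: the pool the kernel reports must equal the 1025 pages this odd_alloc test requested plus any surplus and reserved pages. A minimal sketch of that accounting (variable names mirror the trace; the exact SPDK script may differ):

    nr_hugepages=1025                     # requested by the odd_alloc test
    surp=$(get_meminfo HugePages_Surp)    # 0 in the trace above
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in the trace above
    total=$(get_meminfo HugePages_Total)  # 1025, read back next
    # The kernel-reported pool must account for exactly the requested
    # pages plus surplus and reserved ones.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2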
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:58.505 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908064 kB' 'MemAvailable: 9506504 kB' 'Buffers: 2436 kB' 'Cached: 1812100 kB' 'SwapCached: 0 kB' 'Active: 462232 kB' 'Inactive: 1472080 kB' 'Active(anon): 130248 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121372 kB' 'Mapped: 48668 kB' 'Shmem: 10472 kB' 'KReclaimable: 63080 kB' 'Slab: 135404 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72324 kB' 'KernelStack: 6272 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
[repetitive xtrace elided: the read loop skips every field from MemTotal through Unaccepted, then matches the requested one]
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
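get_nodes at hugepages.sh@27-@33 then enumerates the NUMA nodes under sysfs; on this single-node VM it finds only node0, carrying all 1025 pages. Roughly the following (a sketch reusing the get_meminfo approximation from above; the real helper may read the sysfs hugepages counters directly):

    shopt -s extglob                  # the node+([0-9]) glob in the trace needs extglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Key by the numeric suffix ("0" for node0), value = pages on that node.
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}         # 1 here
    (( no_nodes > 0 ))                # sanity check traced at @33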
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908064 kB' 'MemUsed: 4333908 kB' 'SwapCached: 0 kB' 'Active: 462484 kB' 'Inactive: 1472080 kB' 'Active(anon): 130500 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1814536 kB' 'Mapped: 48668 kB' 'AnonPages: 121620 kB' 'Shmem: 10472 kB' 'KernelStack: 6272 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63080 kB' 'Slab: 135400 kB' 'SReclaimable: 63080 kB' 'SUnreclaim: 72320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
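Note the per-node variant here: because a node index (0) was passed, common.sh@24 switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that @29 strips before parsing; the node file also reports MemUsed and FilePages instead of MemAvailable/Buffers/Cached, as visible in the snapshot above. A standalone sketch of those two steps:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node 0 " token pair
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp):'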
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.507 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:58.508 node0=1025 expecting 1025 00:06:58.508 ************************************ 00:06:58.508 END TEST odd_alloc 00:06:58.508 ************************************ 00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 
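Spelled out, the pass condition odd_alloc just exercised is a plain arithmetic identity over the values echoed in the trace (numbers copied from this run; the variable names mirror hugepages.sh, and resv=0 follows from the check passing):

    # From the trace: get_meminfo HugePages_Total echoed 1025,
    # get_meminfo HugePages_Surp 0 echoed 0, no pages reserved.
    nr_hugepages=1025
    surp=0
    resv=0
    total=1025   # HugePages_Total from /proc/meminfo
    (( total == nr_hugepages + surp + resv )) \
        && echo "node0=$total expecting $nr_hugepages"
    # -> node0=1025 expecting 1025, the line printed by the test above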
00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:06:58.508
00:06:58.508 real 0m0.707s
00:06:58.508 user 0m0.318s
00:06:58.508 sys 0m0.400s
00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:58.508 13:48:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:58.508 13:48:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:06:58.508 13:48:22 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:06:58.508 13:48:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:58.508 13:48:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:58.508 13:48:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:58.508 ************************************
00:06:58.508 START TEST custom_alloc
00:06:58.508 ************************************
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
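The nr_hugepages=512 that get_test_nr_hugepages derives from its 1048576 argument is the requested size in kB divided by the default hugepage size, which the meminfo dumps below report as 'Hugepagesize: 2048 kB'. The derivation, written out (helper internals approximated from the trace):

    # 1 GiB requested, expressed in kB, split into 2 MiB hugepages:
    default_hugepages=2048    # kB, i.e. 'Hugepagesize: 2048 kB'
    size=1048576              # kB, the argument to get_test_nr_hugepages
    (( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
    echo "$nr_hugepages"      # -> 512, matching nr_hugepages=512 above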
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:06:58.508 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:06:58.509 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:06:58.509 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:06:58.509 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:06:58.509 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:58.509 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:59.078 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:59.078 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:59.078 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:59.078 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:59.078 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
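HUGENODE ends up as nodes_hp[0]=512, which scripts/setup.sh consumes to place all 512 pages on node 0. That per-node placement ultimately maps onto the kernel's per-node hugetlb sysfs counter; an illustrative root-shell sketch (not SPDK's actual code, just the underlying kernel interface):

    # Illustration only: allocate 512 x 2048 kB hugepages on NUMA node 0
    # through the kernel's per-node hugetlb interface (requires root).
    node=0 pages=512
    sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "$pages" > "$sysfs"
    cat "$sysfs"   # re-read: the kernel reports how many pages it really granted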
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:59.078 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:59.079 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:59.079 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8956108 kB' 'MemAvailable: 10554552 kB' 'Buffers: 2436 kB' 'Cached: 1812096 kB' 'SwapCached: 0 kB' 'Active: 463032 kB' 'Inactive: 1472076 kB' 'Active(anon): 131048 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122160 kB' 'Mapped: 48828 kB' 'Shmem: 10472 kB' 'KReclaimable: 63096 kB' 'Slab: 135424 kB' 'SReclaimable: 63096 kB' 'SUnreclaim: 72328 kB' 'KernelStack: 6344 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:59.079 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:59.079 [... the scan steps field by field from MemTotal through HardwareCorrupted, with no match until AnonHugePages ...]
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:59.080 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8956108 kB' 'MemAvailable: 10554552 kB' 'Buffers: 2436 kB' 'Cached: 1812096 kB' 'SwapCached: 0 kB' 'Active: 462556 kB' 'Inactive: 1472076 kB' 'Active(anon): 130572 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121680 kB' 'Mapped: 48668 kB' 'Shmem: 10472 kB' 'KReclaimable: 63096 kB' 'Slab: 135436 kB' 'SReclaimable: 63096 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 6272 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:59.081 [... the scan repeats over the same fields; this excerpt of the log breaks off mid-scan, just past the HugePages_Rsvd comparison, before HugePages_Surp matches ...]
00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8956232 kB' 'MemAvailable: 10554676 kB' 'Buffers: 2436 kB' 'Cached: 1812096 kB' 'SwapCached: 0 kB' 'Active: 462504 kB' 'Inactive: 1472076 kB' 'Active(anon): 130520 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121636 kB' 'Mapped: 48668 kB' 'Shmem: 10472 kB' 'KReclaimable: 63096 kB' 'Slab: 135440 kB' 'SReclaimable: 63096 kB' 'SUnreclaim: 72344 kB' 'KernelStack: 6272 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
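Throughout this trace the right-hand side of every [[ ... == ... ]] test is printed with each character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\S\u\r\p). That is simply how bash xtrace renders a quoted match operand, marking a literal string comparison rather than a glob pattern. A minimal repro (variable name illustrative; output shown for the default PS4 prompt, whereas this log uses a custom PS4 that prepends the timestamp and script@line):

    set -x
    key=HugePages_Surp
    [[ $key == "HugePages_Surp" ]] && echo match
    # xtrace prints:
    # + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    # + echo match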
00:06:59.082 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: every /proc/meminfo key from MemTotal through HugePages_Free is read and compared against HugePages_Rsvd; none match, loop continues]
00:06:59.343 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:59.343 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:59.343 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:59.343 nr_hugepages=512
00:06:59.343 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:59.343 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:06:59.343 resv_hugepages=0
00:06:59.343 surplus_hugepages=0
00:06:59.343 anon_hugepages=0
00:06:59.343 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:59.343 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:59.343 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:59.343 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:59.343 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
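The lookups traced above all follow the same pattern: mapfile the chosen meminfo file into an array, then read each line with IFS=': ' into var/val and echo val once the requested key matches. A plausible standalone reconstruction of that helper, inferred from the xtrace (the @NN markers are line numbers inside setup/common.sh); the loop body and the here-string are assumptions, not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob

    # Reconstructed sketch of get_meminfo as suggested by the xtrace above.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem line

        mem_f=/proc/meminfo
        # With a node argument, prefer the per-node sysfs meminfo when present.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip it (extglob).
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Rsvd     # -> 0 on this box, per the trace
    get_meminfo HugePages_Surp 0   # node0 value, if node0/meminfo exists

Note how an empty $node makes the sysfs test probe /sys/devices/system/node/node/meminfo, which fails, so the system-wide /proc/meminfo is used: exactly the behavior visible in the trace.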
00:06:59.343 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8956232 kB' 'MemAvailable: 10554676 kB' 'Buffers: 2436 kB' 'Cached: 1812096 kB' 'SwapCached: 0 kB' 'Active: 462408 kB' 'Inactive: 1472076 kB' 'Active(anon): 130424 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121528 kB' 'Mapped: 48668 kB' 'Shmem: 10472 kB' 'KReclaimable: 63096 kB' 'Slab: 135436 kB' 'SReclaimable: 63096 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 6256 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:59.344 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: keys MemTotal through Unaccepted compared against HugePages_Total; none match, loop continues]
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:59.345 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8956232 kB' 'MemUsed: 3285740 kB' 'SwapCached: 0 kB' 'Active: 462416 kB' 'Inactive: 1472076 kB' 'Active(anon): 130432 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1814532 kB' 'Mapped: 48668 kB' 'AnonPages: 121528 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63096 kB' 'Slab: 135436 kB' 'SReclaimable: 63096 kB' 'SUnreclaim: 72340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: node0 meminfo keys MemTotal through FilePmdMapped compared against HugePages_Surp; no match so far, loop continues]
00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.346 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:59.347 node0=512 expecting 512 00:06:59.347 ************************************ 00:06:59.347 END TEST custom_alloc 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:59.347 00:06:59.347 real 0m0.695s 00:06:59.347 user 0m0.323s 00:06:59.347 sys 0m0.382s 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.347 13:48:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:59.347 ************************************ 00:06:59.347 13:48:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:59.347 13:48:23 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:59.347 13:48:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.347 13:48:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.347 13:48:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:59.347 ************************************ 00:06:59.347 START TEST no_shrink_alloc 00:06:59.347 ************************************ 00:06:59.347 13:48:23 
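The scan that just completed is setup/common.sh's get_meminfo helper walking the meminfo dump one "field: value" pair at a time until the requested key matches. A minimal sketch of the same technique, assuming plain /proc/meminfo input; the function name and exact structure here are illustrative, not the SPDK source itself:

    # Sketch: IFS=': ' splits "HugePages_Surp:    0" into var=HugePages_Surp
    # and val=0; every non-matching field falls through to the next read,
    # which is exactly the long run of "continue" entries traced above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Quoting "$get" forces a literal comparison; xtrace renders the
            # same idea as the escaped pattern \H\u\g\e\P\a\g\e\s\_\S\u\r\p.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on this box, per the dumps below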
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:59.347 13:48:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:59.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:59.870 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:59.870 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:59.870 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:59.870 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
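The sizing step above, get_test_nr_hugepages 2097152 0, settles on nr_hugepages=1024: assuming the size argument is in kB, a 2097152 kB request divided by the 2048 kB default hugepage size is 1024 pages, all assigned to the single requested node. The meminfo dumps below agree (HugePages_Total: 1024, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB). A quick check of that arithmetic, with illustrative variable names:

    # 2097152 kB of hugepage memory at 2048 kB per page is the 1024 pages
    # that get_test_nr_hugepages_per_node just stored in nodes_test[0].
    size_kb=2097152
    hugepagesize_kb=2048
    echo $(( size_kb / hugepagesize_kb ))   # -> 1024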
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908200 kB' 'MemAvailable: 9506640 kB' 'Buffers: 2436 kB' 'Cached: 1812096 kB' 'SwapCached: 0 kB' 'Active: 460076 kB' 'Inactive: 1472076 kB' 'Active(anon): 128092 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 119212 kB' 'Mapped: 48176 kB' 'Shmem: 10472 kB' 'KReclaimable: 63088 kB' 'Slab: 135240 kB' 'SReclaimable: 63088 kB' 'SUnreclaim: 72152 kB' 'KernelStack: 6232 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:59.870 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] fails, and setup/common.sh@32 -- # continue repeats, for each of: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
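Before sampling AnonHugePages, the guard at setup/hugepages.sh@96 checked the kernel's transparent-hugepage mode: the left-hand side, always [madvise] never, is the mode file's content with the active mode bracketed, and the escaped pattern *\[\n\e\v\e\r\]* matches the literal substring "[never]" (unescaped, [never] would be a bracket expression matching a single character). A sketch of that check, assuming the standard sysfs path, which this log does not itself show:

    # Proceed only when "[never]" is not the selected THP mode; quoting the
    # brackets keeps them literal, just like the backslashes in the trace.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        echo "THP enabled ($thp); sample AnonHugePages (0 kB here)"
    fi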
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:59.871 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908436 kB' 'MemAvailable: 9506876 kB' 'Buffers: 2436 kB' 'Cached: 1812096 kB' 'SwapCached: 0 kB' 'Active: 459412 kB' 'Inactive: 1472076 kB' 'Active(anon): 127428 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118528 kB' 'Mapped: 47944 kB' 'Shmem: 10472 kB' 'KReclaimable: 63088 kB' 'Slab: 135272 kB' 'SReclaimable: 63088 kB' 'SUnreclaim: 72184 kB' 'KernelStack: 6192 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:59.872 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] fails, and setup/common.sh@32 -- # continue repeats, for each of: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd
00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
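With anon=0 and surp=0 established, verify_nr_hugepages makes one last get_meminfo pass for HugePages_Rsvd. The values it collects can also be spot-checked in one shot; this grep is an illustrative equivalent of the traced queries, not a command the script itself runs:

    # Everything verify_nr_hugepages asks for, straight from /proc/meminfo.
    # The dumps in this log show 1024 total, 1024 free, 0 rsvd, 0 surp,
    # and AnonHugePages: 0 kB.
    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp))' /proc/meminfo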
kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118216 kB' 'Mapped: 47928 kB' 'Shmem: 10472 kB' 'KReclaimable: 63088 kB' 'Slab: 135280 kB' 'SReclaimable: 63088 kB' 'SUnreclaim: 72192 kB' 'KernelStack: 6192 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.873 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 
13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.874 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:59.875 nr_hugepages=1024 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:59.875 resv_hugepages=0 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:59.875 surplus_hugepages=0 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:59.875 anon_hugepages=0 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7909044 kB' 'MemAvailable: 9507484 kB' 'Buffers: 2436 kB' 'Cached: 1812096 kB' 'SwapCached: 0 kB' 'Active: 459068 kB' 'Inactive: 1472076 kB' 'Active(anon): 127084 kB' 
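The get_meminfo calls traced in this stage all follow the same shape: pick /proc/meminfo (or, when a node number is passed, that node's meminfo file under sysfs), strip the "Node N " prefix that the per-node file puts on every row, then split each row on ': ' and print the value whose key matches the requested field. A minimal sketch of that idea in bash (simplified from the setup/common.sh trace above, which uses mapfile plus an extglob substitution; this is not the script verbatim):

    get_meminfo() { # usage: get_meminfo <field> [<node>] -> prints the value column
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # per-node statistics live under sysfs; fall back to the global file otherwise
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "} # node files prefix every row with "Node N "; a no-op for /proc/meminfo
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # e.g.: surp=$(get_meminfo HugePages_Surp); total0=$(get_meminfo HugePages_Total 0)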
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:59.875 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7909044 kB' 'MemAvailable: 9507484 kB' 'Buffers: 2436 kB' 'Cached: 1812096 kB' 'SwapCached: 0 kB' 'Active: 459068 kB' 'Inactive: 1472076 kB' 'Active(anon): 127084 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118188 kB' 'Mapped: 47928 kB' 'Shmem: 10472 kB' 'KReclaimable: 63088 kB' 'Slab: 135280 kB' 'SReclaimable: 63088 kB' 'SUnreclaim: 72192 kB' 'KernelStack: 6192 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:06:59.875 [setup/common.sh@31-32: the scan again continues past every snapshot field, MemTotal through Unaccepted, until HugePages_Total matches]
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
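The arithmetic checks at setup/hugepages.sh@107 and @109-@110 around these lookups encode the invariant this test cares about: the kernel's HugePages_Total must equal the configured pool plus any surplus and reserved pages. Standalone, reusing the get_meminfo sketch above (the variable names mirror the trace, but this is an illustration rather than the script itself):

    nr_hugepages=1024                    # the pool size the test configured
    surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond the static pool
    resv=$(get_meminfo HugePages_Rsvd)   # pages promised to mappings but not yet faulted in
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage pool out of balance" >&2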
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:59.876 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7909044 kB' 'MemUsed: 4332928 kB' 'SwapCached: 0 kB' 'Active: 459256 kB' 'Inactive: 1472076 kB' 'Active(anon): 127272 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1814532 kB' 'Mapped: 47928 kB' 'AnonPages: 118368 kB' 'Shmem: 10472 kB' 'KernelStack: 6176 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63088 kB' 'Slab: 135280 kB' 'SReclaimable: 63088 kB' 'SUnreclaim: 72192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:07:00.134 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:00.134 [setup/common.sh@31-32: the scan continues past every node0 field, MemTotal through HugePages_Free, until HugePages_Surp matches]
00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:00.135 node0=1024 expecting 1024 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:00.135 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:00.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:00.655 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:00.655 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:00.655 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:00.655 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:00.655 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 
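[editor's note: the get_meminfo calls traced below (setup/common.sh@16-@33) are a small /proc/meminfo parser: choose the system-wide file or a per-node /sys/devices/system/node/nodeN/meminfo, strip any "Node N " prefix, then read key/value pairs with IFS=': ' until the requested key matches. A minimal Bash sketch reconstructed from the traced statements; exact quoting and the per-node branch (the [[ -n '' ]] test at common.sh@25) are approximated rather than copied from the source:

    # get_meminfo <field> [<node>] - print one meminfo value,
    # e.g. `get_meminfo HugePages_Surp` prints 0 in the traces below.
    shopt -s extglob  # needed for the +([0-9]) pattern (common.sh@29)
    get_meminfo() {
        local get=$1 node=$2   # common.sh@17-@18
        local var val          # common.sh@19
        local mem_f mem        # common.sh@20

        mem_f=/proc/meminfo    # common.sh@22
        # Prefer the per-node file when it exists (common.sh@23); the extra
        # [[ -n $node ]] handling at common.sh@25 is elided in this sketch.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"         # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")  # drop "Node N " prefixes (common.sh@29)

        # The printf at common.sh@16 replays the cached lines into the loop.
        while IFS=': ' read -r var val _; do  # common.sh@31
            [[ $var == "$get" ]] || continue  # common.sh@32
            echo "$val" && return 0           # common.sh@33
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In the trace that follows, AnonHugePages, HugePages_Surp and HugePages_Rsvd are all 0 in the meminfo dumps, so each scan ends in an echo 0 / return 0 pair and verify_nr_hugepages records anon=0 (hugepages.sh@97) and surp=0 (hugepages.sh@99), while HugePages_Total and HugePages_Free stay at 1024. That is the behaviour under test in no_shrink_alloc: setup.sh was re-run with NRHUGE=512 and CLEAR_HUGE=no and, per the INFO line above, kept the existing 1024-page allocation instead of shrinking it to 512.]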
00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.655 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.656 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7909892 kB' 'MemAvailable: 9508336 kB' 'Buffers: 2436 kB' 'Cached: 1812100 kB' 'SwapCached: 0 kB' 'Active: 460304 kB' 'Inactive: 1472080 kB' 'Active(anon): 128320 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 119196 kB' 'Mapped: 48080 kB' 'Shmem: 10472 kB' 'KReclaimable: 63088 kB' 'Slab: 135252 kB' 'SReclaimable: 63088 kB' 'SUnreclaim: 72164 kB' 'KernelStack: 6212 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:07:00.656 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.656 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.656 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.656 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ [log condensed: setup/common.sh@31-@32 repeat the same read/compare/continue over MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS; none matches AnonHugePages] 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:00.657 13:48:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7909892 kB' 'MemAvailable: 9508336 kB' 'Buffers: 2436 kB' 'Cached: 1812100 kB' 'SwapCached: 0 kB' 'Active: 459752 kB' 'Inactive: 1472080 kB' 'Active(anon): 127768 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118636 kB' 'Mapped: 47928 kB' 'Shmem: 10472 kB' 'KReclaimable: 63088 kB' 'Slab: 135248 kB' 'SReclaimable: 63088 kB' 'SUnreclaim: 72160 kB' 'KernelStack: 6176 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.657 13:48:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.657 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.657 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.657 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.657 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.657 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.657 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.657 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.657 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.657 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.657 
13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' [log condensed: setup/common.sh@31-@32 repeat the same read/compare/continue over SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree; none matches HugePages_Surp] 00:07:00.658 13:48:25 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # IFS=': ' 00:07:00.658 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.658 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.658 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.658 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.658 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.658 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.658 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.658 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:00.659 13:48:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7909892 kB' 'MemAvailable: 9508336 kB' 'Buffers: 2436 kB' 'Cached: 1812100 kB' 'SwapCached: 0 kB' 'Active: 459380 kB' 'Inactive: 1472080 kB' 'Active(anon): 127396 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118492 kB' 'Mapped: 47928 kB' 'Shmem: 10472 kB' 'KReclaimable: 63088 kB' 'Slab: 135252 kB' 'SReclaimable: 63088 kB' 'SUnreclaim: 72164 kB' 'KernelStack: 6192 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:07:00.659 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [... per-field xtrace trimmed: every field from MemTotal through HugePages_Free fails [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and is skipped with continue ...]
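A note on the trace above: bash xtrace prints the right-hand side of each [[ ... == ... ]] with every character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\R\s\v\d) because an unquoted right-hand side in [[ ]] is a glob pattern, and escaping each character forces a literal comparison. A two-line illustration, not taken from the SPDK scripts:

    var=HugePages_Rsvd
    [[ $var == HugePages* ]] && echo "glob: matches any HugePages_ field"
    [[ $var == "HugePages_Rsvd" ]] && echo "literal: what the field loop needs"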
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:00.661 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7909892 kB' 'MemAvailable: 9508336 kB' 'Buffers: 2436 kB' 'Cached: 1812100 kB' 'SwapCached: 0 kB' 'Active: 459364 kB' 'Inactive: 1472080 kB' 'Active(anon): 127380 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118488 kB' 'Mapped: 47928 kB' 'Shmem: 10472 kB' 'KReclaimable: 63088 kB' 'Slab: 135244 kB' 'SReclaimable: 63088 kB' 'SUnreclaim: 72156 kB' 'KernelStack: 6192 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
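The get_meminfo calls traced above boil down to splitting each "Field: value kB" line of /proc/meminfo on ': ' and returning the value of the requested field. A minimal standalone sketch of that pattern; this is a simplified while-read version for illustration, whereas the real setup/common.sh reads the file into an array with mapfile first:

    #!/usr/bin/env bash
    # Return the numeric value of one /proc/meminfo field, as in the trace.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # IFS=': ' splits "HugePages_Total:    1024" into var=HugePages_Total val=1024
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Total   # prints 1024 on the VM above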
00:07:00.661 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [... per-field xtrace trimmed: every field from MemTotal through Unaccepted fails [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and is skipped with continue ...]
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:00.662 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:00.663 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7909640 kB' 'MemUsed: 4332332 kB' 'SwapCached: 0 kB' 'Active: 459376 kB' 'Inactive: 1472080 kB' 'Active(anon): 127392 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1472080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1814536 kB' 'Mapped: 47928 kB' 'AnonPages: 118492 kB' 'Shmem: 10472 kB' 'KernelStack: 6192 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63088 kB' 'Slab: 135244 kB' 'SReclaimable: 63088 kB' 'SUnreclaim: 72156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
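For the node-scoped snapshot just printed, common.sh swapped /proc/meminfo for /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") expansion strips it with an extglob pattern. A sketch of just that node handling, assuming a bash with extglob available:

    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    # Per-node meminfo exists on NUMA-aware kernels; fall back to the global file.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node 0 " prefix
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_'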
00:07:00.663 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [... per-field xtrace trimmed: every field from MemTotal through HugePages_Free fails [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and is skipped with continue ...]
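With the loop about to land on HugePages_Surp (0, below), and HugePages_Rsvd already read as 0, the checks that follow reduce to simple arithmetic: the kernel's HugePages_Total must equal the 1024 pages the test configured plus surplus and reserved, and on this single-node VM node0 must hold all of them. A condensed sketch of the same checks, using standard kernel sysfs paths and the 2048 kB page size reported above:

    nr_hugepages=1024
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
    node0=$(</sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
    echo "node0=$node0 expecting $nr_hugepages"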
00:07:00.664 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:00.664 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:00.664 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:00.664 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:00.664 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:00.664 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:00.664 node0=1024 expecting 1024
13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:00.664 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:07:00.664 13:48:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:07:00.664
00:07:00.664 real 0m1.382s
00:07:00.664 user 0m0.629s
00:07:00.664 sys 0m0.790s
13:48:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
13:48:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:07:00.664 ************************************
00:07:00.664 END TEST no_shrink_alloc
00:07:00.664 ************************************
00:07:00.664 13:48:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
13:48:25 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
13:48:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
13:48:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
13:48:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
13:48:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
13:48:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
13:48:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
13:48:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
13:48:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:07:00.664
00:07:00.664 real 0m6.223s
00:07:00.664 user 0m2.815s
00:07:00.664 sys 0m3.400s
00:07:00.664 ************************************
00:07:00.664 END TEST hugepages
00:07:00.664 ************************************
00:07:00.664 13:48:25 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
13:48:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:07:00.922 13:48:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0
13:48:25 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
13:48:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
13:48:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
13:48:25 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:07:00.922 ************************************
00:07:00.922 START TEST driver
00:07:00.922 ************************************
00:07:00.922 13:48:25 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:07:00.922 * Looking for test storage...
00:07:00.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:07:00.922 13:48:25 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
13:48:25 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
13:48:25 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:07:07.481 13:48:31 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
13:48:31 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
13:48:31 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
13:48:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:07:07.481 ************************************
00:07:07.481 START TEST guess_driver
00:07:07.481 ************************************
00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 ))
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]]
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1
13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:07:07.481 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:07:07.481 Looking for driver=uio_pci_generic 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:07:07.481 13:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:07.738 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:07.738 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:07.738 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:07.738 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:07.738 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:07.738 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:07.997 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:07.997 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:07.997 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:07.997 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:07.997 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:07.997 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:07.997 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:07.997 13:48:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:07:07.997 13:48:32 setup.sh.driver.guess_driver 
-- setup/common.sh@9 -- # [[ reset == output ]] 00:07:07.997 13:48:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:14.555 00:07:14.555 real 0m7.161s 00:07:14.555 user 0m0.796s 00:07:14.555 sys 0m1.440s 00:07:14.555 13:48:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.555 13:48:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:07:14.555 ************************************ 00:07:14.555 END TEST guess_driver 00:07:14.555 ************************************ 00:07:14.555 13:48:38 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:07:14.555 00:07:14.555 real 0m13.160s 00:07:14.555 user 0m1.138s 00:07:14.555 sys 0m2.201s 00:07:14.555 13:48:38 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.555 13:48:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:14.555 ************************************ 00:07:14.555 END TEST driver 00:07:14.555 ************************************ 00:07:14.555 13:48:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:14.555 13:48:38 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:14.555 13:48:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.556 13:48:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.556 13:48:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:14.556 ************************************ 00:07:14.556 START TEST devices 00:07:14.556 ************************************ 00:07:14.556 13:48:38 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:14.556 * Looking for test storage... 
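The guess_driver pass that just ended shows the whole driver-selection logic: driver.sh@36 tries vfio first, and only when no /sys/kernel/iommu_groups entries exist and unsafe no-IOMMU mode is off does driver.sh@38 fall back to uio, accepting uio_pci_generic once modprobe --show-depends resolves it to real .ko files. A condensed sketch of that decision; the vfio-pci name in the success branch is an assumption, since that branch is never taken in this run:

    shopt -s nullglob   # an empty iommu_groups directory must count as zero matches
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*) unsafe=''
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci          # assumed name; this branch is not exercised above
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic   # module and its deps resolve to insmod-able .ko files
        else
            echo 'No valid driver found'
        fi
    }
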
00:07:14.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:14.556 13:48:38 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:14.556 13:48:38 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:07:14.556 13:48:38 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:14.556 13:48:38 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:15.124 13:48:39 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:15.124 13:48:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:15.124 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:15.124 13:48:39 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:15.124 13:48:39 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:07:15.384 No valid GPT data, bailing 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:15.384 13:48:39 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:15.384 13:48:39 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:15.384 13:48:39 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:07:15.384 
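The is_block_zoned checks running through here read one sysfs attribute per namespace; every device in this run reports none, so nothing is excluded. A standalone sketch of the traced loop (anything beyond what the trace shows, such as extra bookkeeping in the real helper, is omitted):

    # Collect block devices whose queue reports a zoned model other than "none"
    get_zoned_devs() {
        local -gA zoned_devs=()
        local nvme model
        for nvme in /sys/block/nvme*; do
            [[ -e $nvme/queue/zoned ]] || continue
            model=$(<"$nvme/queue/zoned")
            [[ $model != none ]] && zoned_devs[${nvme##*/}]=$model
        done
    }
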
13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:07:15.384 No valid GPT data, bailing 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:07:15.384 13:48:39 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:07:15.384 13:48:39 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:07:15.384 13:48:39 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:07:15.384 No valid GPT data, bailing 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:07:15.384 13:48:39 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:07:15.384 13:48:39 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:07:15.384 13:48:39 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:07:15.384 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:07:15.384 13:48:39 
setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:07:15.384 13:48:39 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:07:15.643 No valid GPT data, bailing 00:07:15.643 13:48:39 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:15.643 13:48:39 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:15.643 13:48:39 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:15.643 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:07:15.643 13:48:39 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:07:15.643 13:48:39 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:07:15.643 13:48:39 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:15.643 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:15.643 13:48:39 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:15.643 13:48:39 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:07:15.643 13:48:39 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:15.643 13:48:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:07:15.643 13:48:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:07:15.643 13:48:39 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:07:15.643 13:48:39 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:07:15.643 13:48:39 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:07:15.643 13:48:39 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:07:15.643 13:48:39 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:07:15.643 No valid GPT data, bailing 00:07:15.643 13:48:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:15.643 13:48:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:15.643 13:48:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:07:15.643 13:48:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:07:15.643 13:48:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:07:15.643 13:48:40 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:07:15.643 13:48:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:07:15.643 13:48:40 setup.sh.devices 
-- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:07:15.643 No valid GPT data, bailing 00:07:15.643 13:48:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:15.643 13:48:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:15.643 13:48:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:07:15.643 13:48:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:07:15.643 13:48:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:07:15.643 13:48:40 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:15.643 13:48:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:15.643 13:48:40 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.643 13:48:40 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.643 13:48:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:15.643 ************************************ 00:07:15.643 START TEST nvme_mount 00:07:15.643 ************************************ 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:15.643 13:48:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 
-- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:17.021 Creating new GPT entries in memory. 00:07:17.021 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:17.021 other utilities. 00:07:17.022 13:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:17.022 13:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:17.022 13:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:17.022 13:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:17.022 13:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:17.957 Creating new GPT entries in memory. 00:07:17.957 The operation has completed successfully. 00:07:17.957 13:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:17.957 13:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:17.957 13:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59871 00:07:17.957 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:17.957 13:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:07:17.957 13:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:17.957 13:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:17.957 13:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- 
# [[ output == output ]] 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:17.958 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:18.216 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:18.216 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:18.216 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:18.216 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:18.216 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:18.216 13:48:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:18.474 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:18.732 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:18.732 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:18.989 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:18.989 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 
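A note on the devices.sh@200-@206 scan traced a little earlier (the PMBR line of the wipefs output above follows in the next entry): a namespace becomes a candidate test disk only if spdk-gpt.py and blkid find no partition table ("No valid GPT data, bailing") and sec_size_to_bytes reports at least min_disk_size (3221225472 bytes, i.e. 3 GiB). A simplified sketch; it assumes /sys/block sizes count 512-byte sectors and reduces the spdk-gpt.py half of the in-use check to blkid alone:

    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

    sec_size_to_bytes() {
        echo $(( $(< "/sys/block/$1/size") * 512 ))   # sysfs size is in 512-byte sectors
    }

    blocks=()
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        [[ $dev == *c* ]] && continue   # skip controller paths such as nvme3c3n1
        blkid -s PTTYPE -o value "/dev/$dev" >/dev/null 2>&1 && continue   # partitioned: in use
        (( $(sec_size_to_bytes "$dev") >= min_disk_size )) && blocks+=("$dev")
    done
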
00:07:18.989 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:18.989 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:18.989 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:07:18.989 13:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:07:18.989 13:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:18.989 13:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:18.989 13:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:19.246 13:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:19.247 13:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:19.247 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:19.247 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:19.247 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:19.247 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:19.247 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:19.247 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:19.505 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:19.505 
13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:19.505 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:19.505 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:19.505 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:19.505 13:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:19.764 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:19.764 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:20.023 13:48:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:20.591 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:20.591 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:20.591 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:20.591 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.591 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:20.591 13:48:44 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.591 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:20.591 13:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.591 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:20.591 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.591 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:20.591 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.849 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:20.849 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.108 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:21.108 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:21.108 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:07:21.108 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:07:21.108 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:21.108 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:21.108 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:21.108 13:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:21.108 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:21.108 00:07:21.108 real 0m5.443s 00:07:21.108 user 0m1.517s 00:07:21.108 sys 0m1.612s 00:07:21.108 13:48:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.108 13:48:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:07:21.108 ************************************ 00:07:21.108 END TEST nvme_mount 00:07:21.108 ************************************ 00:07:21.108 13:48:45 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:07:21.108 13:48:45 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:21.108 13:48:45 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.108 13:48:45 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.108 13:48:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:21.108 ************************************ 00:07:21.108 START TEST dm_mount 00:07:21.108 ************************************ 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:07:21.108 
13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:21.108 13:48:45 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:22.485 Creating new GPT entries in memory. 00:07:22.485 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:22.485 other utilities. 00:07:22.485 13:48:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:22.485 13:48:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:22.486 13:48:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:22.486 13:48:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:22.486 13:48:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:23.421 Creating new GPT entries in memory. 00:07:23.421 The operation has completed successfully. 00:07:23.421 13:48:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:23.421 13:48:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:23.421 13:48:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:23.421 13:48:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:23.421 13:48:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:07:24.356 The operation has completed successfully. 
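The two sgdisk successes above come from the common.sh@39-@60 partitioner: zap the label, then carve part_no partitions of size/4096 sectors each, starting at sector 2048 and serialized with flock. A sketch of that arithmetic (the sync_dev_uevents.sh wrapper that waits for the partition uevents is left out):

    partition_drive() {
        local disk=$1 part_no=${2:-1} size=1073741824
        local part part_start=0 part_end=0
        (( size /= 4096 ))              # per-partition length, in sectors
        sgdisk "/dev/$disk" --zap-all   # destroy any existing GPT/MBR structures
        for (( part = 1; part <= part_no; part++ )); do
            (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
            (( part_end = part_start + size - 1 ))
            # yields --new=1:2048:264191 and --new=2:264192:526335, as logged above
            flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
        done
    }
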
00:07:24.356 13:48:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:24.356 13:48:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:24.356 13:48:48 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60507 00:07:24.356 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:24.357 13:48:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:24.616 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:24.616 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:24.616 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:24.616 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:24.616 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:24.616 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:24.616 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:24.616 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:24.875 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:24.875 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:24.875 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:24.875 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.161 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:25.161 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:25.423 13:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.682 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:25.682 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.682 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:25.682 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.682 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:25.682 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.940 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:25.940 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
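The cleanup_dm trace starting here (its wipefs output continues in the next entries) unwinds the device-mapper test: the "53 ef" bytes below are the ext4 superblock magic, and the "45 46 49 20 50 41 52 54" runs seen elsewhere in this log are the ASCII string "EFI PART" of the two GPT headers. A sketch of the traced teardown, with the test partitions hard-coded the way this run uses them:

    cleanup_dm() {
        local mnt=$1 dm_name=${2:-nvme_dm_test}
        mountpoint -q "$mnt" && umount "$mnt"
        [[ -L /dev/mapper/$dm_name ]] && dmsetup remove --force "$dm_name"
        # Wipe leftover filesystem signatures from both backing partitions
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
        [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
    }
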
00:07:26.198 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:07:26.198 00:07:26.198 real 0m5.073s 00:07:26.198 user 0m0.972s 00:07:26.198 sys 0m1.023s 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.198 ************************************ 00:07:26.198 END TEST dm_mount 00:07:26.198 ************************************ 00:07:26.198 13:48:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:07:26.457 13:48:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:07:26.457 13:48:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:07:26.457 13:48:50 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:07:26.457 13:48:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:26.457 13:48:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:26.457 13:48:50 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:26.457 13:48:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:26.457 13:48:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:26.715 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:26.715 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:26.715 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:26.715 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:26.715 13:48:51 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:07:26.715 13:48:51 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:26.715 13:48:51 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:26.715 13:48:51 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:26.715 13:48:51 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:26.715 13:48:51 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:26.715 13:48:51 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:26.715 00:07:26.715 real 0m12.610s 00:07:26.715 user 0m3.455s 00:07:26.715 sys 0m3.465s 00:07:26.715 13:48:51 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.715 13:48:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:26.715 ************************************ 00:07:26.715 END TEST devices 00:07:26.715 ************************************ 00:07:26.715 13:48:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:26.715 00:07:26.715 real 0m44.355s 00:07:26.715 user 0m10.676s 00:07:26.715 sys 0m13.164s 00:07:26.715 13:48:51 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.715 13:48:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:26.715 ************************************ 00:07:26.715 END TEST setup.sh 00:07:26.715 ************************************ 00:07:26.715 13:48:51 -- common/autotest_common.sh@1142 -- # return 0 00:07:26.715 13:48:51 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:27.280 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:27.537 Hugepages 00:07:27.537 node hugesize free / total 00:07:27.537 node0 1048576kB 0 / 0 00:07:27.537 node0 2048kB 2048 / 2048 00:07:27.537 00:07:27.537 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:27.795 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:27.795 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:27.795 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:28.054 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:07:28.054 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:07:28.054 13:48:52 -- spdk/autotest.sh@130 -- # uname -s 00:07:28.054 13:48:52 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:07:28.054 13:48:52 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:07:28.054 13:48:52 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:28.619 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:29.183 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.183 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.183 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.183 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.183 13:48:53 -- common/autotest_common.sh@1532 -- # sleep 1 00:07:30.556 13:48:54 -- common/autotest_common.sh@1533 -- # bdfs=() 00:07:30.556 13:48:54 -- common/autotest_common.sh@1533 -- # local bdfs 00:07:30.556 13:48:54 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:07:30.556 13:48:54 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:07:30.556 13:48:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:30.556 13:48:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:30.556 13:48:54 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:30.556 13:48:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:30.556 13:48:54 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:30.556 13:48:54 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:07:30.556 13:48:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:30.556 13:48:54 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:30.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.814 Waiting for block devices as requested 00:07:30.814 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.814 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.072 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.072 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:36.341 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:36.341 13:49:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:36.341 13:49:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:36.341 13:49:00 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:07:36.341 13:49:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:36.341 13:49:00 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:36.341 13:49:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:36.341 13:49:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:36.341 13:49:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:07:36.341 13:49:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:07:36.341 13:49:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:36.342 13:49:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:36.342 13:49:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:36.342 13:49:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1557 -- # continue 00:07:36.342 13:49:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:36.342 13:49:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:36.342 13:49:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:36.342 13:49:00 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:07:36.342 13:49:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:36.342 13:49:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:36.342 13:49:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:07:36.342 13:49:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:07:36.342 13:49:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:36.342 13:49:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:36.342 13:49:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:36.342 13:49:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1557 -- # continue 00:07:36.342 13:49:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:36.342 13:49:00 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:36.342 13:49:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:36.342 13:49:00 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:07:36.342 13:49:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:36.342 13:49:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:36.342 13:49:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:36.342 13:49:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1557 -- # continue 00:07:36.342 13:49:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:36.342 13:49:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:36.342 13:49:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:36.342 13:49:00 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:07:36.342 13:49:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:36.342 13:49:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:36.342 13:49:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:07:36.342 13:49:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:07:36.342 13:49:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:36.342 13:49:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:36.342 13:49:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:36.342 13:49:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:36.342 13:49:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:36.342 13:49:00 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:36.342 13:49:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:36.342 13:49:00 -- common/autotest_common.sh@1557 -- # continue 00:07:36.342 13:49:00 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:07:36.342 13:49:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.342 13:49:00 -- common/autotest_common.sh@10 -- # set +x 00:07:36.342 13:49:00 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:07:36.342 13:49:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:36.342 13:49:00 -- common/autotest_common.sh@10 -- # set +x 00:07:36.342 13:49:00 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:36.909 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:37.475 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.475 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.475 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.475 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.475 13:49:01 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:07:37.475 13:49:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:37.475 13:49:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.475 13:49:01 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:07:37.475 13:49:01 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:07:37.475 13:49:01 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:07:37.475 13:49:01 -- common/autotest_common.sh@1577 -- # bdfs=() 00:07:37.475 13:49:01 -- common/autotest_common.sh@1577 -- # local bdfs 00:07:37.475 13:49:01 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:07:37.475 13:49:01 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:37.475 13:49:01 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:37.475 13:49:01 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:37.475 13:49:01 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:37.475 13:49:01 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:37.733 13:49:02 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:07:37.734 13:49:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:37.734 13:49:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:37.734 13:49:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:37.734 13:49:02 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:37.734 13:49:02 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:37.734 13:49:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:37.734 13:49:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:37.734 13:49:02 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:37.734 13:49:02 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:37.734 13:49:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:37.734 13:49:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:37.734 13:49:02 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:37.734 13:49:02 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:37.734 13:49:02 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:37.734 13:49:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:37.734 13:49:02 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:37.734 13:49:02 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:37.734 13:49:02 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:07:37.734 13:49:02 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:07:37.734 13:49:02 -- common/autotest_common.sh@1593 -- # return 0 00:07:37.734 13:49:02 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:07:37.734 13:49:02 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:07:37.734 13:49:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:37.734 13:49:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:37.734 13:49:02 -- spdk/autotest.sh@162 -- # timing_enter lib 00:07:37.734 13:49:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.734 13:49:02 -- common/autotest_common.sh@10 -- # set +x 00:07:37.734 13:49:02 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:07:37.734 13:49:02 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:37.734 13:49:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.734 13:49:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.734 13:49:02 -- common/autotest_common.sh@10 -- # set +x 00:07:37.734 ************************************ 00:07:37.734 START TEST env 00:07:37.734 ************************************ 00:07:37.734 13:49:02 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:37.734 * Looking for test storage... 00:07:37.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:37.734 13:49:02 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:37.734 13:49:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.734 13:49:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.734 13:49:02 env -- common/autotest_common.sh@10 -- # set +x 00:07:37.734 ************************************ 00:07:37.734 START TEST env_memory 00:07:37.734 ************************************ 00:07:37.734 13:49:02 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:37.734 00:07:37.734 00:07:37.734 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.734 http://cunit.sourceforge.net/ 00:07:37.734 00:07:37.734 00:07:37.734 Suite: memory 00:07:37.734 Test: alloc and free memory map ...[2024-07-15 13:49:02.237090] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:37.992 passed 00:07:37.992 Test: mem map translation ...[2024-07-15 13:49:02.303536] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:37.992 [2024-07-15 13:49:02.303636] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:37.992 [2024-07-15 13:49:02.303737] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:37.992 [2024-07-15 13:49:02.303770] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:37.992 passed 00:07:37.992 Test: mem map registration ...[2024-07-15 13:49:02.402834] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:37.992 [2024-07-15 13:49:02.402929] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:37.992 passed 00:07:38.251 Test: mem map adjacent registrations ...passed 00:07:38.251 00:07:38.251 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.251 suites 1 1 n/a 0 0 00:07:38.251 tests 4 4 4 0 0 00:07:38.251 asserts 152 152 152 0 n/a 00:07:38.251 00:07:38.251 Elapsed time = 0.353 seconds 00:07:38.251 00:07:38.251 real 0m0.392s 00:07:38.251 user 0m0.355s 00:07:38.251 sys 0m0.027s 00:07:38.251 13:49:02 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.251 ************************************ 00:07:38.251 END TEST env_memory 00:07:38.251 ************************************ 00:07:38.251 13:49:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:38.251 13:49:02 env -- common/autotest_common.sh@1142 -- # return 0 00:07:38.251 13:49:02 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:38.251 13:49:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.251 13:49:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.251 13:49:02 env -- common/autotest_common.sh@10 -- # set +x 00:07:38.251 ************************************ 00:07:38.251 START TEST env_vtophys 00:07:38.251 ************************************ 00:07:38.251 13:49:02 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:38.251 EAL: lib.eal log level changed from notice to debug 00:07:38.251 EAL: Detected lcore 0 as core 0 on socket 0 00:07:38.251 EAL: Detected lcore 1 as core 0 on socket 0 00:07:38.251 EAL: Detected lcore 2 as core 0 on socket 0 00:07:38.251 EAL: Detected lcore 3 as core 0 on socket 0 00:07:38.251 EAL: Detected lcore 4 as core 0 on socket 0 00:07:38.251 EAL: Detected lcore 5 as core 0 on socket 0 00:07:38.251 EAL: Detected lcore 6 as core 0 on socket 0 00:07:38.251 EAL: Detected lcore 7 as core 0 on socket 0 00:07:38.251 EAL: Detected lcore 8 as core 0 on socket 0 00:07:38.251 EAL: Detected lcore 9 as core 0 on socket 0 00:07:38.251 EAL: Maximum logical cores by configuration: 128 00:07:38.251 EAL: Detected CPU lcores: 10 00:07:38.251 EAL: Detected NUMA nodes: 1 00:07:38.251 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:38.251 EAL: Detected shared linkage of DPDK 00:07:38.251 EAL: No shared files mode enabled, IPC will be disabled 00:07:38.251 EAL: Selected IOVA mode 'PA' 00:07:38.251 EAL: Probing VFIO support... 00:07:38.251 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:38.251 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:38.251 EAL: Ask a virtual area of 0x2e000 bytes 00:07:38.251 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:38.251 EAL: Setting up physically contiguous memory... 
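A few entries up, EAL reports /sys/module/vfio missing and skips VFIO, which is why every setup.sh pass in this log binds the NVMe controllers to uio_pci_generic instead of vfio-pci. Roughly the decision involved, as a hedged sketch (the real scripts/setup.sh logic checks more than this; the driver variable is illustrative):

  if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
      driver=vfio-pci          # preferred when the VFIO stack is loaded
  else
      driver=uio_pci_generic   # the fallback seen throughout this log
  fi
  echo "NVMe controllers would bind to $driver"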
00:07:38.251 EAL: Setting maximum number of open files to 524288 00:07:38.251 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:38.251 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:38.251 EAL: Ask a virtual area of 0x61000 bytes 00:07:38.251 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:38.251 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:38.251 EAL: Ask a virtual area of 0x400000000 bytes 00:07:38.251 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:38.251 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:38.251 EAL: Ask a virtual area of 0x61000 bytes 00:07:38.251 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:38.251 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:38.251 EAL: Ask a virtual area of 0x400000000 bytes 00:07:38.251 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:38.251 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:38.251 EAL: Ask a virtual area of 0x61000 bytes 00:07:38.251 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:38.251 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:38.251 EAL: Ask a virtual area of 0x400000000 bytes 00:07:38.251 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:38.251 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:38.251 EAL: Ask a virtual area of 0x61000 bytes 00:07:38.251 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:38.251 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:38.251 EAL: Ask a virtual area of 0x400000000 bytes 00:07:38.251 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:38.251 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:38.251 EAL: Hugepages will be freed exactly as allocated. 00:07:38.251 EAL: No shared files mode enabled, IPC is disabled 00:07:38.251 EAL: No shared files mode enabled, IPC is disabled 00:07:38.510 EAL: TSC frequency is ~2200000 KHz 00:07:38.510 EAL: Main lcore 0 is ready (tid=7f8e51811a40;cpuset=[0]) 00:07:38.510 EAL: Trying to obtain current memory policy. 00:07:38.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:38.510 EAL: Restoring previous memory policy: 0 00:07:38.510 EAL: request: mp_malloc_sync 00:07:38.510 EAL: No shared files mode enabled, IPC is disabled 00:07:38.510 EAL: Heap on socket 0 was expanded by 2MB 00:07:38.510 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:38.510 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:38.510 EAL: Mem event callback 'spdk:(nil)' registered 00:07:38.510 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:38.510 00:07:38.510 00:07:38.510 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.510 http://cunit.sourceforge.net/ 00:07:38.510 00:07:38.510 00:07:38.510 Suite: components_suite 00:07:38.769 Test: vtophys_malloc_test ...passed 00:07:38.769 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
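Each of the four memseg lists above reserves a 0x61000-byte header plus 0x400000000 bytes of virtual address space: 8192 segments at the 2 MiB hugepage size, i.e. 16 GiB of VA per list. The reservation size checks out directly:

  printf '0x%x\n' $((8192 * 2 * 1024 * 1024))   # -> 0x400000000, matching the EAL lines above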
00:07:38.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:38.769 EAL: Restoring previous memory policy: 4 00:07:38.769 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.769 EAL: request: mp_malloc_sync 00:07:38.769 EAL: No shared files mode enabled, IPC is disabled 00:07:38.769 EAL: Heap on socket 0 was expanded by 4MB 00:07:38.769 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.769 EAL: request: mp_malloc_sync 00:07:38.769 EAL: No shared files mode enabled, IPC is disabled 00:07:38.769 EAL: Heap on socket 0 was shrunk by 4MB 00:07:38.769 EAL: Trying to obtain current memory policy. 00:07:38.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:38.769 EAL: Restoring previous memory policy: 4 00:07:38.769 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.769 EAL: request: mp_malloc_sync 00:07:38.769 EAL: No shared files mode enabled, IPC is disabled 00:07:38.769 EAL: Heap on socket 0 was expanded by 6MB 00:07:38.769 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.769 EAL: request: mp_malloc_sync 00:07:38.769 EAL: No shared files mode enabled, IPC is disabled 00:07:38.769 EAL: Heap on socket 0 was shrunk by 6MB 00:07:38.769 EAL: Trying to obtain current memory policy. 00:07:38.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:38.769 EAL: Restoring previous memory policy: 4 00:07:38.769 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.769 EAL: request: mp_malloc_sync 00:07:38.769 EAL: No shared files mode enabled, IPC is disabled 00:07:38.769 EAL: Heap on socket 0 was expanded by 10MB 00:07:38.769 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.769 EAL: request: mp_malloc_sync 00:07:38.769 EAL: No shared files mode enabled, IPC is disabled 00:07:38.769 EAL: Heap on socket 0 was shrunk by 10MB 00:07:38.769 EAL: Trying to obtain current memory policy. 00:07:38.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:38.769 EAL: Restoring previous memory policy: 4 00:07:38.769 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.769 EAL: request: mp_malloc_sync 00:07:38.769 EAL: No shared files mode enabled, IPC is disabled 00:07:38.769 EAL: Heap on socket 0 was expanded by 18MB 00:07:39.027 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.027 EAL: request: mp_malloc_sync 00:07:39.027 EAL: No shared files mode enabled, IPC is disabled 00:07:39.027 EAL: Heap on socket 0 was shrunk by 18MB 00:07:39.027 EAL: Trying to obtain current memory policy. 00:07:39.027 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.027 EAL: Restoring previous memory policy: 4 00:07:39.027 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.027 EAL: request: mp_malloc_sync 00:07:39.027 EAL: No shared files mode enabled, IPC is disabled 00:07:39.027 EAL: Heap on socket 0 was expanded by 34MB 00:07:39.027 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.027 EAL: request: mp_malloc_sync 00:07:39.027 EAL: No shared files mode enabled, IPC is disabled 00:07:39.027 EAL: Heap on socket 0 was shrunk by 34MB 00:07:39.027 EAL: Trying to obtain current memory policy. 
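The expand/shrink sizes in vtophys_spdk_malloc_test are not arbitrary: the test roughly doubles its allocation each round, and the observed heap growth at step n matches 2^n + 2 MB, reproducing the 4, 6, 10, 18, 34, ... progression above and below:

  for n in $(seq 1 10); do printf '%d MB\n' $((2**n + 2)); done
  # prints 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB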
00:07:39.027 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.027 EAL: Restoring previous memory policy: 4 00:07:39.027 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.027 EAL: request: mp_malloc_sync 00:07:39.027 EAL: No shared files mode enabled, IPC is disabled 00:07:39.027 EAL: Heap on socket 0 was expanded by 66MB 00:07:39.286 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.286 EAL: request: mp_malloc_sync 00:07:39.286 EAL: No shared files mode enabled, IPC is disabled 00:07:39.286 EAL: Heap on socket 0 was shrunk by 66MB 00:07:39.286 EAL: Trying to obtain current memory policy. 00:07:39.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.286 EAL: Restoring previous memory policy: 4 00:07:39.286 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.286 EAL: request: mp_malloc_sync 00:07:39.286 EAL: No shared files mode enabled, IPC is disabled 00:07:39.286 EAL: Heap on socket 0 was expanded by 130MB 00:07:39.544 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.544 EAL: request: mp_malloc_sync 00:07:39.544 EAL: No shared files mode enabled, IPC is disabled 00:07:39.544 EAL: Heap on socket 0 was shrunk by 130MB 00:07:39.802 EAL: Trying to obtain current memory policy. 00:07:39.802 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.802 EAL: Restoring previous memory policy: 4 00:07:39.802 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.802 EAL: request: mp_malloc_sync 00:07:39.802 EAL: No shared files mode enabled, IPC is disabled 00:07:39.802 EAL: Heap on socket 0 was expanded by 258MB 00:07:40.061 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.061 EAL: request: mp_malloc_sync 00:07:40.061 EAL: No shared files mode enabled, IPC is disabled 00:07:40.061 EAL: Heap on socket 0 was shrunk by 258MB 00:07:40.627 EAL: Trying to obtain current memory policy. 00:07:40.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.627 EAL: Restoring previous memory policy: 4 00:07:40.627 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.627 EAL: request: mp_malloc_sync 00:07:40.627 EAL: No shared files mode enabled, IPC is disabled 00:07:40.627 EAL: Heap on socket 0 was expanded by 514MB 00:07:41.561 EAL: Calling mem event callback 'spdk:(nil)' 00:07:41.561 EAL: request: mp_malloc_sync 00:07:41.561 EAL: No shared files mode enabled, IPC is disabled 00:07:41.561 EAL: Heap on socket 0 was shrunk by 514MB 00:07:42.128 EAL: Trying to obtain current memory policy. 
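Even the 1026 MB step that follows fits comfortably in the pool configured earlier (node0: 2048 hugepages of 2048 kB, i.e. 4096 MB). A quick sanity check on the headroom:

  echo $((1026 / 2))    # 513 hugepages of 2 MiB needed for the largest step
  echo $((2048 * 2))    # 4096 MB in the pool, so well under half is used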
00:07:42.128 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:42.388 EAL: Restoring previous memory policy: 4 00:07:42.388 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.388 EAL: request: mp_malloc_sync 00:07:42.388 EAL: No shared files mode enabled, IPC is disabled 00:07:42.388 EAL: Heap on socket 0 was expanded by 1026MB 00:07:43.764 EAL: Calling mem event callback 'spdk:(nil)' 00:07:44.023 EAL: request: mp_malloc_sync 00:07:44.023 EAL: No shared files mode enabled, IPC is disabled 00:07:44.023 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:45.399 passed 00:07:45.399 00:07:45.399 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.399 suites 1 1 n/a 0 0 00:07:45.399 tests 2 2 2 0 0 00:07:45.399 asserts 5320 5320 5320 0 n/a 00:07:45.399 00:07:45.399 Elapsed time = 6.906 seconds 00:07:45.399 EAL: Calling mem event callback 'spdk:(nil)' 00:07:45.399 EAL: request: mp_malloc_sync 00:07:45.399 EAL: No shared files mode enabled, IPC is disabled 00:07:45.399 EAL: Heap on socket 0 was shrunk by 2MB 00:07:45.399 EAL: No shared files mode enabled, IPC is disabled 00:07:45.399 EAL: No shared files mode enabled, IPC is disabled 00:07:45.399 EAL: No shared files mode enabled, IPC is disabled 00:07:45.399 00:07:45.399 real 0m7.235s 00:07:45.399 user 0m6.350s 00:07:45.399 sys 0m0.717s 00:07:45.399 13:49:09 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.399 13:49:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:45.399 ************************************ 00:07:45.399 END TEST env_vtophys 00:07:45.399 ************************************ 00:07:45.399 13:49:09 env -- common/autotest_common.sh@1142 -- # return 0 00:07:45.399 13:49:09 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:45.399 13:49:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.399 13:49:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.399 13:49:09 env -- common/autotest_common.sh@10 -- # set +x 00:07:45.399 ************************************ 00:07:45.399 START TEST env_pci 00:07:45.399 ************************************ 00:07:45.399 13:49:09 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:45.399 00:07:45.399 00:07:45.399 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.399 http://cunit.sourceforge.net/ 00:07:45.399 00:07:45.399 00:07:45.399 Suite: pci 00:07:45.399 Test: pci_hook ...[2024-07-15 13:49:09.914497] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 62323 has claimed it 00:07:45.399 EAL: Cannot find device (10000:00:01.0) 00:07:45.399 EAL: Failed to attach device on primary process 00:07:45.399 passed 00:07:45.399 00:07:45.399 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.399 suites 1 1 n/a 0 0 00:07:45.399 tests 1 1 1 0 0 00:07:45.399 asserts 25 25 25 0 n/a 00:07:45.399 00:07:45.399 Elapsed time = 0.007 seconds 00:07:45.658 00:07:45.658 real 0m0.076s 00:07:45.658 user 0m0.036s 00:07:45.658 sys 0m0.040s 00:07:45.658 13:49:09 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.658 13:49:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:45.658 ************************************ 00:07:45.658 END TEST env_pci 00:07:45.658 ************************************ 00:07:45.658 13:49:09 env -- common/autotest_common.sh@1142 -- # 
return 0 00:07:45.658 13:49:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:45.658 13:49:09 env -- env/env.sh@15 -- # uname 00:07:45.658 13:49:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:45.658 13:49:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:45.658 13:49:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:45.658 13:49:09 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:45.658 13:49:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.658 13:49:09 env -- common/autotest_common.sh@10 -- # set +x 00:07:45.658 ************************************ 00:07:45.658 START TEST env_dpdk_post_init 00:07:45.658 ************************************ 00:07:45.658 13:49:10 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:45.658 EAL: Detected CPU lcores: 10 00:07:45.658 EAL: Detected NUMA nodes: 1 00:07:45.658 EAL: Detected shared linkage of DPDK 00:07:45.658 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:45.658 EAL: Selected IOVA mode 'PA' 00:07:45.915 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:45.915 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:45.915 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:45.915 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:45.915 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:45.915 Starting DPDK initialization... 00:07:45.915 Starting SPDK post initialization... 00:07:45.915 SPDK NVMe probe 00:07:45.915 Attaching to 0000:00:10.0 00:07:45.915 Attaching to 0000:00:11.0 00:07:45.915 Attaching to 0000:00:12.0 00:07:45.915 Attaching to 0000:00:13.0 00:07:45.915 Attached to 0000:00:11.0 00:07:45.915 Attached to 0000:00:13.0 00:07:45.915 Attached to 0000:00:10.0 00:07:45.915 Attached to 0000:00:12.0 00:07:45.915 Cleaning up... 
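Note the attach lines above: probes complete asynchronously, so controllers report in completion order (11.0, 13.0, 10.0, 12.0) rather than BDF order. The binary can also be rerun by hand with the same flags the harness passed; a sketch assuming the repo layout used throughout this log and root access to the devices:

  cd /home/vagrant/spdk_repo/spdk
  sudo test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000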
00:07:45.915 00:07:45.915 real 0m0.309s 00:07:45.915 user 0m0.125s 00:07:45.915 sys 0m0.087s 00:07:45.915 13:49:10 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.915 ************************************ 00:07:45.915 END TEST env_dpdk_post_init 00:07:45.915 13:49:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:45.915 ************************************ 00:07:45.915 13:49:10 env -- common/autotest_common.sh@1142 -- # return 0 00:07:45.915 13:49:10 env -- env/env.sh@26 -- # uname 00:07:45.915 13:49:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:45.915 13:49:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:45.915 13:49:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.915 13:49:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.915 13:49:10 env -- common/autotest_common.sh@10 -- # set +x 00:07:45.915 ************************************ 00:07:45.916 START TEST env_mem_callbacks 00:07:45.916 ************************************ 00:07:45.916 13:49:10 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:45.916 EAL: Detected CPU lcores: 10 00:07:45.916 EAL: Detected NUMA nodes: 1 00:07:45.916 EAL: Detected shared linkage of DPDK 00:07:45.916 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:45.916 EAL: Selected IOVA mode 'PA' 00:07:46.173 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:46.173 00:07:46.173 00:07:46.173 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.173 http://cunit.sourceforge.net/ 00:07:46.173 00:07:46.173 00:07:46.173 Suite: memory 00:07:46.173 Test: test ... 
00:07:46.173 register 0x200000200000 2097152 00:07:46.173 malloc 3145728 00:07:46.173 register 0x200000400000 4194304 00:07:46.173 buf 0x2000004fffc0 len 3145728 PASSED 00:07:46.173 malloc 64 00:07:46.173 buf 0x2000004ffec0 len 64 PASSED 00:07:46.173 malloc 4194304 00:07:46.173 register 0x200000800000 6291456 00:07:46.173 buf 0x2000009fffc0 len 4194304 PASSED 00:07:46.173 free 0x2000004fffc0 3145728 00:07:46.173 free 0x2000004ffec0 64 00:07:46.173 unregister 0x200000400000 4194304 PASSED 00:07:46.173 free 0x2000009fffc0 4194304 00:07:46.173 unregister 0x200000800000 6291456 PASSED 00:07:46.173 malloc 8388608 00:07:46.173 register 0x200000400000 10485760 00:07:46.173 buf 0x2000005fffc0 len 8388608 PASSED 00:07:46.173 free 0x2000005fffc0 8388608 00:07:46.173 unregister 0x200000400000 10485760 PASSED 00:07:46.173 passed 00:07:46.173 00:07:46.173 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.173 suites 1 1 n/a 0 0 00:07:46.173 tests 1 1 1 0 0 00:07:46.173 asserts 15 15 15 0 n/a 00:07:46.173 00:07:46.173 Elapsed time = 0.059 seconds 00:07:46.173 00:07:46.173 real 0m0.252s 00:07:46.173 user 0m0.094s 00:07:46.173 sys 0m0.056s 00:07:46.173 13:49:10 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.173 13:49:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:46.173 ************************************ 00:07:46.173 END TEST env_mem_callbacks 00:07:46.173 ************************************ 00:07:46.173 13:49:10 env -- common/autotest_common.sh@1142 -- # return 0 00:07:46.173 00:07:46.173 real 0m8.572s 00:07:46.173 user 0m7.055s 00:07:46.173 sys 0m1.128s 00:07:46.173 13:49:10 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.173 13:49:10 env -- common/autotest_common.sh@10 -- # set +x 00:07:46.173 ************************************ 00:07:46.173 END TEST env 00:07:46.173 ************************************ 00:07:46.173 13:49:10 -- common/autotest_common.sh@1142 -- # return 0 00:07:46.173 13:49:10 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:46.173 13:49:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.173 13:49:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.173 13:49:10 -- common/autotest_common.sh@10 -- # set +x 00:07:46.173 ************************************ 00:07:46.173 START TEST rpc 00:07:46.173 ************************************ 00:07:46.173 13:49:10 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:46.431 * Looking for test storage... 00:07:46.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:46.431 13:49:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62442 00:07:46.431 13:49:10 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:46.431 13:49:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:46.431 13:49:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62442 00:07:46.431 13:49:10 rpc -- common/autotest_common.sh@829 -- # '[' -z 62442 ']' 00:07:46.431 13:49:10 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.431 13:49:10 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.431 13:49:10 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
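waitforlisten blocks here until the freshly launched spdk_tgt answers on its RPC socket. One way to approximate that wait, assuming the default /var/tmp/spdk.sock and the rpc.py client shipped in the repo:

  sock=/var/tmp/spdk.sock
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; do
      sleep 0.1   # keep polling until the target is up and listening
  done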
00:07:46.431 13:49:10 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.431 13:49:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.431 [2024-07-15 13:49:10.947918] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:46.431 [2024-07-15 13:49:10.948085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62442 ] 00:07:46.688 [2024-07-15 13:49:11.124216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.945 [2024-07-15 13:49:11.310489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:46.945 [2024-07-15 13:49:11.310554] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62442' to capture a snapshot of events at runtime. 00:07:46.945 [2024-07-15 13:49:11.310575] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.946 [2024-07-15 13:49:11.310587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.946 [2024-07-15 13:49:11.310601] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62442 for offline analysis/debug. 00:07:46.946 [2024-07-15 13:49:11.310662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.511 13:49:12 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.511 13:49:12 rpc -- common/autotest_common.sh@862 -- # return 0 00:07:47.511 13:49:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:47.511 13:49:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:47.511 13:49:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:47.511 13:49:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:47.511 13:49:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.511 13:49:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.511 13:49:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.511 ************************************ 00:07:47.511 START TEST rpc_integrity 00:07:47.511 ************************************ 00:07:47.511 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:07:47.511 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:47.511 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.511 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.511 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.511 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:47.511 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.769 13:49:12 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:47.769 { 00:07:47.769 "name": "Malloc0", 00:07:47.769 "aliases": [ 00:07:47.769 "c0e92086-252f-4d58-bffc-6065551201f9" 00:07:47.769 ], 00:07:47.769 "product_name": "Malloc disk", 00:07:47.769 "block_size": 512, 00:07:47.769 "num_blocks": 16384, 00:07:47.769 "uuid": "c0e92086-252f-4d58-bffc-6065551201f9", 00:07:47.769 "assigned_rate_limits": { 00:07:47.769 "rw_ios_per_sec": 0, 00:07:47.769 "rw_mbytes_per_sec": 0, 00:07:47.769 "r_mbytes_per_sec": 0, 00:07:47.769 "w_mbytes_per_sec": 0 00:07:47.769 }, 00:07:47.769 "claimed": false, 00:07:47.769 "zoned": false, 00:07:47.769 "supported_io_types": { 00:07:47.769 "read": true, 00:07:47.769 "write": true, 00:07:47.769 "unmap": true, 00:07:47.769 "flush": true, 00:07:47.769 "reset": true, 00:07:47.769 "nvme_admin": false, 00:07:47.769 "nvme_io": false, 00:07:47.769 "nvme_io_md": false, 00:07:47.769 "write_zeroes": true, 00:07:47.769 "zcopy": true, 00:07:47.769 "get_zone_info": false, 00:07:47.769 "zone_management": false, 00:07:47.769 "zone_append": false, 00:07:47.769 "compare": false, 00:07:47.769 "compare_and_write": false, 00:07:47.769 "abort": true, 00:07:47.769 "seek_hole": false, 00:07:47.769 "seek_data": false, 00:07:47.769 "copy": true, 00:07:47.769 "nvme_iov_md": false 00:07:47.769 }, 00:07:47.769 "memory_domains": [ 00:07:47.769 { 00:07:47.769 "dma_device_id": "system", 00:07:47.769 "dma_device_type": 1 00:07:47.769 }, 00:07:47.769 { 00:07:47.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.769 "dma_device_type": 2 00:07:47.769 } 00:07:47.769 ], 00:07:47.769 "driver_specific": {} 00:07:47.769 } 00:07:47.769 ]' 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.769 [2024-07-15 13:49:12.176661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:47.769 [2024-07-15 13:49:12.176744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.769 [2024-07-15 13:49:12.176785] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:47.769 [2024-07-15 13:49:12.176801] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.769 [2024-07-15 13:49:12.179460] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.769 [2024-07-15 13:49:12.179507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:47.769 Passthru0 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.769 
13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:47.769 { 00:07:47.769 "name": "Malloc0", 00:07:47.769 "aliases": [ 00:07:47.769 "c0e92086-252f-4d58-bffc-6065551201f9" 00:07:47.769 ], 00:07:47.769 "product_name": "Malloc disk", 00:07:47.769 "block_size": 512, 00:07:47.769 "num_blocks": 16384, 00:07:47.769 "uuid": "c0e92086-252f-4d58-bffc-6065551201f9", 00:07:47.769 "assigned_rate_limits": { 00:07:47.769 "rw_ios_per_sec": 0, 00:07:47.769 "rw_mbytes_per_sec": 0, 00:07:47.769 "r_mbytes_per_sec": 0, 00:07:47.769 "w_mbytes_per_sec": 0 00:07:47.769 }, 00:07:47.769 "claimed": true, 00:07:47.769 "claim_type": "exclusive_write", 00:07:47.769 "zoned": false, 00:07:47.769 "supported_io_types": { 00:07:47.769 "read": true, 00:07:47.769 "write": true, 00:07:47.769 "unmap": true, 00:07:47.769 "flush": true, 00:07:47.769 "reset": true, 00:07:47.769 "nvme_admin": false, 00:07:47.769 "nvme_io": false, 00:07:47.769 "nvme_io_md": false, 00:07:47.769 "write_zeroes": true, 00:07:47.769 "zcopy": true, 00:07:47.769 "get_zone_info": false, 00:07:47.769 "zone_management": false, 00:07:47.769 "zone_append": false, 00:07:47.769 "compare": false, 00:07:47.769 "compare_and_write": false, 00:07:47.769 "abort": true, 00:07:47.769 "seek_hole": false, 00:07:47.769 "seek_data": false, 00:07:47.769 "copy": true, 00:07:47.769 "nvme_iov_md": false 00:07:47.769 }, 00:07:47.769 "memory_domains": [ 00:07:47.769 { 00:07:47.769 "dma_device_id": "system", 00:07:47.769 "dma_device_type": 1 00:07:47.769 }, 00:07:47.769 { 00:07:47.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.769 "dma_device_type": 2 00:07:47.769 } 00:07:47.769 ], 00:07:47.769 "driver_specific": {} 00:07:47.769 }, 00:07:47.769 { 00:07:47.769 "name": "Passthru0", 00:07:47.769 "aliases": [ 00:07:47.769 "ae8ddc1e-3699-5a8f-bdbc-d8c7fb53f3b3" 00:07:47.769 ], 00:07:47.769 "product_name": "passthru", 00:07:47.769 "block_size": 512, 00:07:47.769 "num_blocks": 16384, 00:07:47.769 "uuid": "ae8ddc1e-3699-5a8f-bdbc-d8c7fb53f3b3", 00:07:47.769 "assigned_rate_limits": { 00:07:47.769 "rw_ios_per_sec": 0, 00:07:47.769 "rw_mbytes_per_sec": 0, 00:07:47.769 "r_mbytes_per_sec": 0, 00:07:47.769 "w_mbytes_per_sec": 0 00:07:47.769 }, 00:07:47.769 "claimed": false, 00:07:47.769 "zoned": false, 00:07:47.769 "supported_io_types": { 00:07:47.769 "read": true, 00:07:47.769 "write": true, 00:07:47.769 "unmap": true, 00:07:47.769 "flush": true, 00:07:47.769 "reset": true, 00:07:47.769 "nvme_admin": false, 00:07:47.769 "nvme_io": false, 00:07:47.769 "nvme_io_md": false, 00:07:47.769 "write_zeroes": true, 00:07:47.769 "zcopy": true, 00:07:47.769 "get_zone_info": false, 00:07:47.769 "zone_management": false, 00:07:47.769 "zone_append": false, 00:07:47.769 "compare": false, 00:07:47.769 "compare_and_write": false, 00:07:47.769 "abort": true, 00:07:47.769 "seek_hole": false, 00:07:47.769 "seek_data": false, 00:07:47.769 "copy": true, 00:07:47.769 "nvme_iov_md": false 00:07:47.769 }, 00:07:47.769 "memory_domains": [ 00:07:47.769 { 00:07:47.769 "dma_device_id": "system", 00:07:47.769 "dma_device_type": 1 00:07:47.769 }, 00:07:47.769 { 00:07:47.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.769 "dma_device_type": 2 
00:07:47.769 } 00:07:47.769 ], 00:07:47.769 "driver_specific": { 00:07:47.769 "passthru": { 00:07:47.769 "name": "Passthru0", 00:07:47.769 "base_bdev_name": "Malloc0" 00:07:47.769 } 00:07:47.769 } 00:07:47.769 } 00:07:47.769 ]' 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.769 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:47.769 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:48.027 ************************************ 00:07:48.027 END TEST rpc_integrity 00:07:48.027 ************************************ 00:07:48.027 13:49:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:48.027 00:07:48.027 real 0m0.334s 00:07:48.027 user 0m0.206s 00:07:48.027 sys 0m0.043s 00:07:48.027 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.027 13:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.027 13:49:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:48.027 13:49:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:48.027 13:49:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:48.027 13:49:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.027 13:49:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.027 ************************************ 00:07:48.027 START TEST rpc_plugins 00:07:48.027 ************************************ 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:07:48.027 13:49:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.027 13:49:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:48.027 13:49:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.027 13:49:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:07:48.027 { 00:07:48.027 "name": "Malloc1", 00:07:48.027 "aliases": [ 00:07:48.027 "c7ce3129-2616-414c-9083-eccc20681d3d" 00:07:48.027 ], 00:07:48.027 "product_name": "Malloc disk", 00:07:48.027 "block_size": 4096, 00:07:48.027 "num_blocks": 256, 00:07:48.027 "uuid": "c7ce3129-2616-414c-9083-eccc20681d3d", 00:07:48.027 "assigned_rate_limits": { 00:07:48.027 "rw_ios_per_sec": 0, 00:07:48.027 "rw_mbytes_per_sec": 0, 00:07:48.027 "r_mbytes_per_sec": 0, 00:07:48.027 "w_mbytes_per_sec": 0 00:07:48.027 }, 00:07:48.027 "claimed": false, 00:07:48.027 "zoned": false, 00:07:48.027 "supported_io_types": { 00:07:48.027 "read": true, 00:07:48.027 "write": true, 00:07:48.027 "unmap": true, 00:07:48.027 "flush": true, 00:07:48.027 "reset": true, 00:07:48.027 "nvme_admin": false, 00:07:48.027 "nvme_io": false, 00:07:48.027 "nvme_io_md": false, 00:07:48.027 "write_zeroes": true, 00:07:48.027 "zcopy": true, 00:07:48.027 "get_zone_info": false, 00:07:48.027 "zone_management": false, 00:07:48.027 "zone_append": false, 00:07:48.027 "compare": false, 00:07:48.027 "compare_and_write": false, 00:07:48.027 "abort": true, 00:07:48.027 "seek_hole": false, 00:07:48.027 "seek_data": false, 00:07:48.027 "copy": true, 00:07:48.027 "nvme_iov_md": false 00:07:48.027 }, 00:07:48.027 "memory_domains": [ 00:07:48.027 { 00:07:48.027 "dma_device_id": "system", 00:07:48.027 "dma_device_type": 1 00:07:48.027 }, 00:07:48.027 { 00:07:48.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.027 "dma_device_type": 2 00:07:48.027 } 00:07:48.027 ], 00:07:48.027 "driver_specific": {} 00:07:48.027 } 00:07:48.027 ]' 00:07:48.027 13:49:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:48.027 13:49:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:48.027 13:49:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.027 13:49:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.027 13:49:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:48.027 13:49:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:48.027 ************************************ 00:07:48.027 END TEST rpc_plugins 00:07:48.027 ************************************ 00:07:48.027 13:49:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:48.027 00:07:48.027 real 0m0.152s 00:07:48.027 user 0m0.098s 00:07:48.027 sys 0m0.021s 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.027 13:49:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:48.284 13:49:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:48.284 13:49:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:48.284 13:49:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:48.284 13:49:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.284 13:49:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.284 ************************************ 00:07:48.284 
START TEST rpc_trace_cmd_test 00:07:48.284 ************************************ 00:07:48.284 13:49:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:07:48.284 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:48.284 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:48.284 13:49:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.284 13:49:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.284 13:49:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.284 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:48.284 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62442", 00:07:48.284 "tpoint_group_mask": "0x8", 00:07:48.284 "iscsi_conn": { 00:07:48.284 "mask": "0x2", 00:07:48.284 "tpoint_mask": "0x0" 00:07:48.284 }, 00:07:48.284 "scsi": { 00:07:48.284 "mask": "0x4", 00:07:48.284 "tpoint_mask": "0x0" 00:07:48.284 }, 00:07:48.284 "bdev": { 00:07:48.284 "mask": "0x8", 00:07:48.284 "tpoint_mask": "0xffffffffffffffff" 00:07:48.284 }, 00:07:48.284 "nvmf_rdma": { 00:07:48.284 "mask": "0x10", 00:07:48.284 "tpoint_mask": "0x0" 00:07:48.284 }, 00:07:48.284 "nvmf_tcp": { 00:07:48.284 "mask": "0x20", 00:07:48.284 "tpoint_mask": "0x0" 00:07:48.284 }, 00:07:48.284 "ftl": { 00:07:48.284 "mask": "0x40", 00:07:48.284 "tpoint_mask": "0x0" 00:07:48.285 }, 00:07:48.285 "blobfs": { 00:07:48.285 "mask": "0x80", 00:07:48.285 "tpoint_mask": "0x0" 00:07:48.285 }, 00:07:48.285 "dsa": { 00:07:48.285 "mask": "0x200", 00:07:48.285 "tpoint_mask": "0x0" 00:07:48.285 }, 00:07:48.285 "thread": { 00:07:48.285 "mask": "0x400", 00:07:48.285 "tpoint_mask": "0x0" 00:07:48.285 }, 00:07:48.285 "nvme_pcie": { 00:07:48.285 "mask": "0x800", 00:07:48.285 "tpoint_mask": "0x0" 00:07:48.285 }, 00:07:48.285 "iaa": { 00:07:48.285 "mask": "0x1000", 00:07:48.285 "tpoint_mask": "0x0" 00:07:48.285 }, 00:07:48.285 "nvme_tcp": { 00:07:48.285 "mask": "0x2000", 00:07:48.285 "tpoint_mask": "0x0" 00:07:48.285 }, 00:07:48.285 "bdev_nvme": { 00:07:48.285 "mask": "0x4000", 00:07:48.285 "tpoint_mask": "0x0" 00:07:48.285 }, 00:07:48.285 "sock": { 00:07:48.285 "mask": "0x8000", 00:07:48.285 "tpoint_mask": "0x0" 00:07:48.285 } 00:07:48.285 }' 00:07:48.285 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:48.285 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:48.285 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:48.285 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:48.285 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:48.285 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:48.285 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:48.543 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:48.543 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:48.543 ************************************ 00:07:48.543 END TEST rpc_trace_cmd_test 00:07:48.543 ************************************ 00:07:48.543 13:49:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:48.543 00:07:48.543 real 0m0.302s 00:07:48.543 user 0m0.261s 00:07:48.543 sys 0m0.025s 00:07:48.543 13:49:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.543 
13:49:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.543 13:49:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:48.543 13:49:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:48.543 13:49:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:48.543 13:49:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:48.543 13:49:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:48.543 13:49:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.543 13:49:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.543 ************************************ 00:07:48.543 START TEST rpc_daemon_integrity 00:07:48.543 ************************************ 00:07:48.543 13:49:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:07:48.543 13:49:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:48.543 13:49:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.543 13:49:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.543 13:49:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.543 13:49:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:48.543 13:49:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:48.543 { 00:07:48.543 "name": "Malloc2", 00:07:48.543 "aliases": [ 00:07:48.543 "adacaaef-d311-48c1-b3bb-51e3936e3854" 00:07:48.543 ], 00:07:48.543 "product_name": "Malloc disk", 00:07:48.543 "block_size": 512, 00:07:48.543 "num_blocks": 16384, 00:07:48.543 "uuid": "adacaaef-d311-48c1-b3bb-51e3936e3854", 00:07:48.543 "assigned_rate_limits": { 00:07:48.543 "rw_ios_per_sec": 0, 00:07:48.543 "rw_mbytes_per_sec": 0, 00:07:48.543 "r_mbytes_per_sec": 0, 00:07:48.543 "w_mbytes_per_sec": 0 00:07:48.543 }, 00:07:48.543 "claimed": false, 00:07:48.543 "zoned": false, 00:07:48.543 "supported_io_types": { 00:07:48.543 "read": true, 00:07:48.543 "write": true, 00:07:48.543 "unmap": true, 00:07:48.543 "flush": true, 00:07:48.543 "reset": true, 00:07:48.543 "nvme_admin": false, 00:07:48.543 "nvme_io": false, 00:07:48.543 "nvme_io_md": false, 00:07:48.543 "write_zeroes": true, 00:07:48.543 "zcopy": true, 00:07:48.543 "get_zone_info": false, 00:07:48.543 "zone_management": false, 00:07:48.543 "zone_append": false, 00:07:48.543 "compare": false, 00:07:48.543 "compare_and_write": false, 00:07:48.543 "abort": true, 00:07:48.543 "seek_hole": false, 
00:07:48.543 "seek_data": false, 00:07:48.543 "copy": true, 00:07:48.543 "nvme_iov_md": false 00:07:48.543 }, 00:07:48.543 "memory_domains": [ 00:07:48.543 { 00:07:48.543 "dma_device_id": "system", 00:07:48.543 "dma_device_type": 1 00:07:48.543 }, 00:07:48.543 { 00:07:48.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.543 "dma_device_type": 2 00:07:48.543 } 00:07:48.543 ], 00:07:48.543 "driver_specific": {} 00:07:48.543 } 00:07:48.543 ]' 00:07:48.543 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.802 [2024-07-15 13:49:13.110326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:48.802 [2024-07-15 13:49:13.110399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.802 [2024-07-15 13:49:13.110435] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:48.802 [2024-07-15 13:49:13.110450] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.802 [2024-07-15 13:49:13.113030] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.802 [2024-07-15 13:49:13.113088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:48.802 Passthru0 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:48.802 { 00:07:48.802 "name": "Malloc2", 00:07:48.802 "aliases": [ 00:07:48.802 "adacaaef-d311-48c1-b3bb-51e3936e3854" 00:07:48.802 ], 00:07:48.802 "product_name": "Malloc disk", 00:07:48.802 "block_size": 512, 00:07:48.802 "num_blocks": 16384, 00:07:48.802 "uuid": "adacaaef-d311-48c1-b3bb-51e3936e3854", 00:07:48.802 "assigned_rate_limits": { 00:07:48.802 "rw_ios_per_sec": 0, 00:07:48.802 "rw_mbytes_per_sec": 0, 00:07:48.802 "r_mbytes_per_sec": 0, 00:07:48.802 "w_mbytes_per_sec": 0 00:07:48.802 }, 00:07:48.802 "claimed": true, 00:07:48.802 "claim_type": "exclusive_write", 00:07:48.802 "zoned": false, 00:07:48.802 "supported_io_types": { 00:07:48.802 "read": true, 00:07:48.802 "write": true, 00:07:48.802 "unmap": true, 00:07:48.802 "flush": true, 00:07:48.802 "reset": true, 00:07:48.802 "nvme_admin": false, 00:07:48.802 "nvme_io": false, 00:07:48.802 "nvme_io_md": false, 00:07:48.802 "write_zeroes": true, 00:07:48.802 "zcopy": true, 00:07:48.802 "get_zone_info": false, 00:07:48.802 "zone_management": false, 00:07:48.802 "zone_append": false, 00:07:48.802 "compare": false, 00:07:48.802 "compare_and_write": false, 00:07:48.802 "abort": true, 00:07:48.802 "seek_hole": false, 00:07:48.802 "seek_data": false, 00:07:48.802 "copy": true, 00:07:48.802 "nvme_iov_md": false 00:07:48.802 }, 00:07:48.802 
"memory_domains": [ 00:07:48.802 { 00:07:48.802 "dma_device_id": "system", 00:07:48.802 "dma_device_type": 1 00:07:48.802 }, 00:07:48.802 { 00:07:48.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.802 "dma_device_type": 2 00:07:48.802 } 00:07:48.802 ], 00:07:48.802 "driver_specific": {} 00:07:48.802 }, 00:07:48.802 { 00:07:48.802 "name": "Passthru0", 00:07:48.802 "aliases": [ 00:07:48.802 "e9aca620-88d5-5abb-80a6-e84f276f2763" 00:07:48.802 ], 00:07:48.802 "product_name": "passthru", 00:07:48.802 "block_size": 512, 00:07:48.802 "num_blocks": 16384, 00:07:48.802 "uuid": "e9aca620-88d5-5abb-80a6-e84f276f2763", 00:07:48.802 "assigned_rate_limits": { 00:07:48.802 "rw_ios_per_sec": 0, 00:07:48.802 "rw_mbytes_per_sec": 0, 00:07:48.802 "r_mbytes_per_sec": 0, 00:07:48.802 "w_mbytes_per_sec": 0 00:07:48.802 }, 00:07:48.802 "claimed": false, 00:07:48.802 "zoned": false, 00:07:48.802 "supported_io_types": { 00:07:48.802 "read": true, 00:07:48.802 "write": true, 00:07:48.802 "unmap": true, 00:07:48.802 "flush": true, 00:07:48.802 "reset": true, 00:07:48.802 "nvme_admin": false, 00:07:48.802 "nvme_io": false, 00:07:48.802 "nvme_io_md": false, 00:07:48.802 "write_zeroes": true, 00:07:48.802 "zcopy": true, 00:07:48.802 "get_zone_info": false, 00:07:48.802 "zone_management": false, 00:07:48.802 "zone_append": false, 00:07:48.802 "compare": false, 00:07:48.802 "compare_and_write": false, 00:07:48.802 "abort": true, 00:07:48.802 "seek_hole": false, 00:07:48.802 "seek_data": false, 00:07:48.802 "copy": true, 00:07:48.802 "nvme_iov_md": false 00:07:48.802 }, 00:07:48.802 "memory_domains": [ 00:07:48.802 { 00:07:48.802 "dma_device_id": "system", 00:07:48.802 "dma_device_type": 1 00:07:48.802 }, 00:07:48.802 { 00:07:48.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.802 "dma_device_type": 2 00:07:48.802 } 00:07:48.802 ], 00:07:48.802 "driver_specific": { 00:07:48.802 "passthru": { 00:07:48.802 "name": "Passthru0", 00:07:48.802 "base_bdev_name": "Malloc2" 00:07:48.802 } 00:07:48.802 } 00:07:48.802 } 00:07:48.802 ]' 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:48.802 
************************************ 00:07:48.802 END TEST rpc_daemon_integrity 00:07:48.802 ************************************ 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:48.802 00:07:48.802 real 0m0.323s 00:07:48.802 user 0m0.196s 00:07:48.802 sys 0m0.036s 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.802 13:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.802 13:49:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:48.802 13:49:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:48.802 13:49:13 rpc -- rpc/rpc.sh@84 -- # killprocess 62442 00:07:48.802 13:49:13 rpc -- common/autotest_common.sh@948 -- # '[' -z 62442 ']' 00:07:48.802 13:49:13 rpc -- common/autotest_common.sh@952 -- # kill -0 62442 00:07:48.802 13:49:13 rpc -- common/autotest_common.sh@953 -- # uname 00:07:48.802 13:49:13 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.802 13:49:13 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62442 00:07:49.060 killing process with pid 62442 00:07:49.060 13:49:13 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:49.060 13:49:13 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:49.060 13:49:13 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62442' 00:07:49.060 13:49:13 rpc -- common/autotest_common.sh@967 -- # kill 62442 00:07:49.060 13:49:13 rpc -- common/autotest_common.sh@972 -- # wait 62442 00:07:50.972 00:07:50.972 real 0m4.753s 00:07:50.972 user 0m5.560s 00:07:50.972 sys 0m0.700s 00:07:50.972 13:49:15 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.972 ************************************ 00:07:50.972 END TEST rpc 00:07:50.972 ************************************ 00:07:50.972 13:49:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.972 13:49:15 -- common/autotest_common.sh@1142 -- # return 0 00:07:50.972 13:49:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:50.972 13:49:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.972 13:49:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.972 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:07:51.260 ************************************ 00:07:51.260 START TEST skip_rpc 00:07:51.260 ************************************ 00:07:51.260 13:49:15 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:51.260 * Looking for test storage... 
00:07:51.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:51.260 13:49:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:51.260 13:49:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:51.260 13:49:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:51.260 13:49:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.260 13:49:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.260 13:49:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.260 ************************************ 00:07:51.260 START TEST skip_rpc 00:07:51.260 ************************************ 00:07:51.260 13:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:07:51.260 13:49:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62658 00:07:51.260 13:49:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:51.260 13:49:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:51.260 13:49:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:51.260 [2024-07-15 13:49:15.702938] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:51.260 [2024-07-15 13:49:15.703105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62658 ] 00:07:51.528 [2024-07-15 13:49:15.865905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.528 [2024-07-15 13:49:16.056962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62658 
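For reference, the skip_rpc case being driven here reduces to: boot spdk_tgt with --no-rpc-server, then assert that any JSON-RPC call fails. A hedged standalone sketch — SPDK_DIR and the 5-second settle time are illustrative assumptions, and the harness's rpc_cmd wraps scripts/rpc.py:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5   # no RPC socket exists to poll, so simply give the target time to start
  if "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
    echo "unexpected: spdk_get_version succeeded with --no-rpc-server" >&2
    kill "$tgt_pid"; exit 1
  fi
  kill "$tgt_pid"   # the test passes only when the RPC attempt errors out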
00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 62658 ']' 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 62658 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.792 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62658 00:07:56.792 killing process with pid 62658 00:07:56.793 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.793 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.793 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62658' 00:07:56.793 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 62658 00:07:56.793 13:49:20 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 62658 00:07:58.694 ************************************ 00:07:58.694 END TEST skip_rpc 00:07:58.694 00:07:58.694 real 0m7.169s 00:07:58.694 user 0m6.726s 00:07:58.694 sys 0m0.318s 00:07:58.694 13:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.694 13:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.694 ************************************ 00:07:58.694 13:49:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:58.694 13:49:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:58.694 13:49:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:58.694 13:49:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.694 13:49:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.694 ************************************ 00:07:58.694 START TEST skip_rpc_with_json 00:07:58.694 ************************************ 00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62764 00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62764 00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 62764 ']' 00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.694 13:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:58.694 [2024-07-15 13:49:22.936722] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:58.694 [2024-07-15 13:49:22.936935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62764 ] 00:07:58.695 [2024-07-15 13:49:23.113452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.952 [2024-07-15 13:49:23.353757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.885 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.885 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:07:59.885 13:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:59.885 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.885 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:59.885 [2024-07-15 13:49:24.224682] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:59.886 request: 00:07:59.886 { 00:07:59.886 "trtype": "tcp", 00:07:59.886 "method": "nvmf_get_transports", 00:07:59.886 "req_id": 1 00:07:59.886 } 00:07:59.886 Got JSON-RPC error response 00:07:59.886 response: 00:07:59.886 { 00:07:59.886 "code": -19, 00:07:59.886 "message": "No such device" 00:07:59.886 } 00:07:59.886 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:59.886 13:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:59.886 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.886 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:59.886 [2024-07-15 13:49:24.236791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.886 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.886 13:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:59.886 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.886 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:59.886 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.886 13:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:59.886 { 00:07:59.886 "subsystems": [ 00:07:59.886 { 00:07:59.886 "subsystem": "keyring", 00:07:59.886 "config": [] 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "iobuf", 00:07:59.886 "config": [ 00:07:59.886 { 00:07:59.886 "method": "iobuf_set_options", 00:07:59.886 "params": { 00:07:59.886 "small_pool_count": 8192, 00:07:59.886 "large_pool_count": 1024, 00:07:59.886 "small_bufsize": 8192, 00:07:59.886 "large_bufsize": 135168 00:07:59.886 } 00:07:59.886 } 00:07:59.886 ] 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "sock", 00:07:59.886 "config": [ 00:07:59.886 { 00:07:59.886 "method": 
"sock_set_default_impl", 00:07:59.886 "params": { 00:07:59.886 "impl_name": "posix" 00:07:59.886 } 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "method": "sock_impl_set_options", 00:07:59.886 "params": { 00:07:59.886 "impl_name": "ssl", 00:07:59.886 "recv_buf_size": 4096, 00:07:59.886 "send_buf_size": 4096, 00:07:59.886 "enable_recv_pipe": true, 00:07:59.886 "enable_quickack": false, 00:07:59.886 "enable_placement_id": 0, 00:07:59.886 "enable_zerocopy_send_server": true, 00:07:59.886 "enable_zerocopy_send_client": false, 00:07:59.886 "zerocopy_threshold": 0, 00:07:59.886 "tls_version": 0, 00:07:59.886 "enable_ktls": false 00:07:59.886 } 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "method": "sock_impl_set_options", 00:07:59.886 "params": { 00:07:59.886 "impl_name": "posix", 00:07:59.886 "recv_buf_size": 2097152, 00:07:59.886 "send_buf_size": 2097152, 00:07:59.886 "enable_recv_pipe": true, 00:07:59.886 "enable_quickack": false, 00:07:59.886 "enable_placement_id": 0, 00:07:59.886 "enable_zerocopy_send_server": true, 00:07:59.886 "enable_zerocopy_send_client": false, 00:07:59.886 "zerocopy_threshold": 0, 00:07:59.886 "tls_version": 0, 00:07:59.886 "enable_ktls": false 00:07:59.886 } 00:07:59.886 } 00:07:59.886 ] 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "vmd", 00:07:59.886 "config": [] 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "accel", 00:07:59.886 "config": [ 00:07:59.886 { 00:07:59.886 "method": "accel_set_options", 00:07:59.886 "params": { 00:07:59.886 "small_cache_size": 128, 00:07:59.886 "large_cache_size": 16, 00:07:59.886 "task_count": 2048, 00:07:59.886 "sequence_count": 2048, 00:07:59.886 "buf_count": 2048 00:07:59.886 } 00:07:59.886 } 00:07:59.886 ] 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "bdev", 00:07:59.886 "config": [ 00:07:59.886 { 00:07:59.886 "method": "bdev_set_options", 00:07:59.886 "params": { 00:07:59.886 "bdev_io_pool_size": 65535, 00:07:59.886 "bdev_io_cache_size": 256, 00:07:59.886 "bdev_auto_examine": true, 00:07:59.886 "iobuf_small_cache_size": 128, 00:07:59.886 "iobuf_large_cache_size": 16 00:07:59.886 } 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "method": "bdev_raid_set_options", 00:07:59.886 "params": { 00:07:59.886 "process_window_size_kb": 1024 00:07:59.886 } 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "method": "bdev_iscsi_set_options", 00:07:59.886 "params": { 00:07:59.886 "timeout_sec": 30 00:07:59.886 } 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "method": "bdev_nvme_set_options", 00:07:59.886 "params": { 00:07:59.886 "action_on_timeout": "none", 00:07:59.886 "timeout_us": 0, 00:07:59.886 "timeout_admin_us": 0, 00:07:59.886 "keep_alive_timeout_ms": 10000, 00:07:59.886 "arbitration_burst": 0, 00:07:59.886 "low_priority_weight": 0, 00:07:59.886 "medium_priority_weight": 0, 00:07:59.886 "high_priority_weight": 0, 00:07:59.886 "nvme_adminq_poll_period_us": 10000, 00:07:59.886 "nvme_ioq_poll_period_us": 0, 00:07:59.886 "io_queue_requests": 0, 00:07:59.886 "delay_cmd_submit": true, 00:07:59.886 "transport_retry_count": 4, 00:07:59.886 "bdev_retry_count": 3, 00:07:59.886 "transport_ack_timeout": 0, 00:07:59.886 "ctrlr_loss_timeout_sec": 0, 00:07:59.886 "reconnect_delay_sec": 0, 00:07:59.886 "fast_io_fail_timeout_sec": 0, 00:07:59.886 "disable_auto_failback": false, 00:07:59.886 "generate_uuids": false, 00:07:59.886 "transport_tos": 0, 00:07:59.886 "nvme_error_stat": false, 00:07:59.886 "rdma_srq_size": 0, 00:07:59.886 "io_path_stat": false, 00:07:59.886 "allow_accel_sequence": false, 00:07:59.886 "rdma_max_cq_size": 0, 
00:07:59.886 "rdma_cm_event_timeout_ms": 0, 00:07:59.886 "dhchap_digests": [ 00:07:59.886 "sha256", 00:07:59.886 "sha384", 00:07:59.886 "sha512" 00:07:59.886 ], 00:07:59.886 "dhchap_dhgroups": [ 00:07:59.886 "null", 00:07:59.886 "ffdhe2048", 00:07:59.886 "ffdhe3072", 00:07:59.886 "ffdhe4096", 00:07:59.886 "ffdhe6144", 00:07:59.886 "ffdhe8192" 00:07:59.886 ] 00:07:59.886 } 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "method": "bdev_nvme_set_hotplug", 00:07:59.886 "params": { 00:07:59.886 "period_us": 100000, 00:07:59.886 "enable": false 00:07:59.886 } 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "method": "bdev_wait_for_examine" 00:07:59.886 } 00:07:59.886 ] 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "scsi", 00:07:59.886 "config": null 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "scheduler", 00:07:59.886 "config": [ 00:07:59.886 { 00:07:59.886 "method": "framework_set_scheduler", 00:07:59.886 "params": { 00:07:59.886 "name": "static" 00:07:59.886 } 00:07:59.886 } 00:07:59.886 ] 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "vhost_scsi", 00:07:59.886 "config": [] 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "vhost_blk", 00:07:59.886 "config": [] 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "ublk", 00:07:59.886 "config": [] 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "nbd", 00:07:59.886 "config": [] 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "subsystem": "nvmf", 00:07:59.886 "config": [ 00:07:59.886 { 00:07:59.886 "method": "nvmf_set_config", 00:07:59.886 "params": { 00:07:59.886 "discovery_filter": "match_any", 00:07:59.886 "admin_cmd_passthru": { 00:07:59.886 "identify_ctrlr": false 00:07:59.886 } 00:07:59.886 } 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "method": "nvmf_set_max_subsystems", 00:07:59.886 "params": { 00:07:59.886 "max_subsystems": 1024 00:07:59.886 } 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "method": "nvmf_set_crdt", 00:07:59.886 "params": { 00:07:59.886 "crdt1": 0, 00:07:59.886 "crdt2": 0, 00:07:59.886 "crdt3": 0 00:07:59.886 } 00:07:59.886 }, 00:07:59.886 { 00:07:59.886 "method": "nvmf_create_transport", 00:07:59.886 "params": { 00:07:59.886 "trtype": "TCP", 00:07:59.886 "max_queue_depth": 128, 00:07:59.886 "max_io_qpairs_per_ctrlr": 127, 00:07:59.886 "in_capsule_data_size": 4096, 00:07:59.886 "max_io_size": 131072, 00:07:59.886 "io_unit_size": 131072, 00:07:59.886 "max_aq_depth": 128, 00:07:59.886 "num_shared_buffers": 511, 00:07:59.886 "buf_cache_size": 4294967295, 00:07:59.886 "dif_insert_or_strip": false, 00:07:59.886 "zcopy": false, 00:07:59.886 "c2h_success": true, 00:07:59.886 "sock_priority": 0, 00:07:59.886 "abort_timeout_sec": 1, 00:07:59.886 "ack_timeout": 0, 00:07:59.886 "data_wr_pool_size": 0 00:07:59.886 } 00:07:59.886 } 00:07:59.886 ] 00:07:59.886 }, 00:07:59.887 { 00:07:59.887 "subsystem": "iscsi", 00:07:59.887 "config": [ 00:07:59.887 { 00:07:59.887 "method": "iscsi_set_options", 00:07:59.887 "params": { 00:07:59.887 "node_base": "iqn.2016-06.io.spdk", 00:07:59.887 "max_sessions": 128, 00:07:59.887 "max_connections_per_session": 2, 00:07:59.887 "max_queue_depth": 64, 00:07:59.887 "default_time2wait": 2, 00:07:59.887 "default_time2retain": 20, 00:07:59.887 "first_burst_length": 8192, 00:07:59.887 "immediate_data": true, 00:07:59.887 "allow_duplicated_isid": false, 00:07:59.887 "error_recovery_level": 0, 00:07:59.887 "nop_timeout": 60, 00:07:59.887 "nop_in_interval": 30, 00:07:59.887 "disable_chap": false, 00:07:59.887 "require_chap": false, 00:07:59.887 "mutual_chap": false, 
00:07:59.887 "chap_group": 0, 00:07:59.887 "max_large_datain_per_connection": 64, 00:07:59.887 "max_r2t_per_connection": 4, 00:07:59.887 "pdu_pool_size": 36864, 00:07:59.887 "immediate_data_pool_size": 16384, 00:07:59.887 "data_out_pool_size": 2048 00:07:59.887 } 00:07:59.887 } 00:07:59.887 ] 00:07:59.887 } 00:07:59.887 ] 00:07:59.887 } 00:07:59.887 13:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:59.887 13:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62764 00:07:59.887 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62764 ']' 00:07:59.887 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62764 00:07:59.887 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:59.887 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:59.887 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62764 00:08:00.144 killing process with pid 62764 00:08:00.144 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:00.144 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:00.144 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62764' 00:08:00.144 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62764 00:08:00.144 13:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62764 00:08:02.679 13:49:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62820 00:08:02.679 13:49:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:02.679 13:49:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:07.979 13:49:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62820 00:08:07.979 13:49:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62820 ']' 00:08:07.979 13:49:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62820 00:08:07.979 13:49:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:08:07.979 13:49:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.979 13:49:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62820 00:08:07.979 killing process with pid 62820 00:08:07.979 13:49:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:07.979 13:49:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:07.979 13:49:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62820' 00:08:07.979 13:49:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62820 00:08:07.979 13:49:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62820 00:08:09.353 13:49:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:09.353 13:49:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 
00:08:09.353 00:08:09.353 real 0m10.930s 00:08:09.353 user 0m10.620s 00:08:09.353 sys 0m0.749s 00:08:09.353 13:49:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.353 ************************************ 00:08:09.353 END TEST skip_rpc_with_json 00:08:09.353 ************************************ 00:08:09.353 13:49:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:09.353 13:49:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:09.353 13:49:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:09.353 13:49:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.353 13:49:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.353 13:49:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.353 ************************************ 00:08:09.353 START TEST skip_rpc_with_delay 00:08:09.353 ************************************ 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:09.354 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:09.612 [2024-07-15 13:49:33.938135] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:09.612 [2024-07-15 13:49:33.938348] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:09.612 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:08:09.612 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:09.612 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:09.612 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:09.612 00:08:09.612 real 0m0.209s 00:08:09.612 user 0m0.111s 00:08:09.612 sys 0m0.095s 00:08:09.612 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.612 13:49:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:09.612 ************************************ 00:08:09.612 END TEST skip_rpc_with_delay 00:08:09.612 ************************************ 00:08:09.612 13:49:34 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:09.612 13:49:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:09.612 13:49:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:09.612 13:49:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:09.612 13:49:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.612 13:49:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.612 13:49:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.612 ************************************ 00:08:09.612 START TEST exit_on_failed_rpc_init 00:08:09.612 ************************************ 00:08:09.612 13:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:08:09.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.612 13:49:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62949 00:08:09.612 13:49:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.612 13:49:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62949 00:08:09.612 13:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62949 ']' 00:08:09.612 13:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.612 13:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.612 13:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.612 13:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.612 13:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:09.872 [2024-07-15 13:49:34.171462] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:08:09.872 [2024-07-15 13:49:34.171645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62949 ] 00:08:09.872 [2024-07-15 13:49:34.372238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.131 [2024-07-15 13:49:34.578419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:11.065 13:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:11.065 [2024-07-15 13:49:35.402479] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:11.065 [2024-07-15 13:49:35.402721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62967 ] 00:08:11.065 [2024-07-15 13:49:35.577261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.324 [2024-07-15 13:49:35.788834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.324 [2024-07-15 13:49:35.788954] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:11.324 [2024-07-15 13:49:35.788979] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:11.324 [2024-07-15 13:49:35.788997] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62949 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62949 ']' 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62949 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62949 00:08:11.891 killing process with pid 62949 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62949' 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62949 00:08:11.891 13:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62949 00:08:14.420 ************************************ 00:08:14.420 END TEST exit_on_failed_rpc_init 00:08:14.420 ************************************ 00:08:14.420 00:08:14.420 real 0m4.323s 00:08:14.420 user 0m5.064s 00:08:14.420 sys 0m0.511s 00:08:14.420 13:49:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.420 13:49:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:14.420 13:49:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:14.420 13:49:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:14.420 ************************************ 00:08:14.420 END TEST skip_rpc 00:08:14.420 ************************************ 00:08:14.420 00:08:14.420 real 0m22.899s 00:08:14.420 user 0m22.613s 00:08:14.420 sys 0m1.840s 00:08:14.420 13:49:38 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.420 13:49:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.420 13:49:38 -- common/autotest_common.sh@1142 -- # return 0 00:08:14.420 13:49:38 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:14.420 13:49:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.420 
13:49:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.420 13:49:38 -- common/autotest_common.sh@10 -- # set +x 00:08:14.420 ************************************ 00:08:14.421 START TEST rpc_client 00:08:14.421 ************************************ 00:08:14.421 13:49:38 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:14.421 * Looking for test storage... 00:08:14.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:14.421 13:49:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:14.421 OK 00:08:14.421 13:49:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:14.421 00:08:14.421 real 0m0.133s 00:08:14.421 user 0m0.063s 00:08:14.421 sys 0m0.075s 00:08:14.421 13:49:38 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.421 13:49:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:14.421 ************************************ 00:08:14.421 END TEST rpc_client 00:08:14.421 ************************************ 00:08:14.421 13:49:38 -- common/autotest_common.sh@1142 -- # return 0 00:08:14.421 13:49:38 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:14.421 13:49:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.421 13:49:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.421 13:49:38 -- common/autotest_common.sh@10 -- # set +x 00:08:14.421 ************************************ 00:08:14.421 START TEST json_config 00:08:14.421 ************************************ 00:08:14.421 13:49:38 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:14.421 13:49:38 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb411604-4b2f-465f-8445-56ea1ec33608 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=eb411604-4b2f-465f-8445-56ea1ec33608 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.421 13:49:38 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.421 13:49:38 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.421 13:49:38 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.421 13:49:38 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.421 13:49:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.421 13:49:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.421 13:49:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.421 13:49:38 json_config -- paths/export.sh@5 -- # export PATH 00:08:14.421 13:49:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@47 -- # : 0 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.421 13:49:38 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.421 13:49:38 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:14.421 13:49:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:14.421 13:49:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:14.421 13:49:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:14.421 13:49:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:14.421 WARNING: No tests are enabled so not running JSON configuration tests 00:08:14.421 13:49:38 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:14.421 13:49:38 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:14.421 00:08:14.421 real 0m0.075s 00:08:14.421 user 0m0.037s 00:08:14.421 sys 0m0.038s 00:08:14.421 13:49:38 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.421 13:49:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:14.421 ************************************ 00:08:14.421 END TEST json_config 00:08:14.421 ************************************ 00:08:14.421 13:49:38 -- common/autotest_common.sh@1142 -- # return 0 00:08:14.421 13:49:38 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:14.421 13:49:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.421 13:49:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.421 13:49:38 -- common/autotest_common.sh@10 -- # set +x 00:08:14.421 ************************************ 00:08:14.421 START TEST json_config_extra_key 00:08:14.421 ************************************ 00:08:14.421 13:49:38 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:14.421 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb411604-4b2f-465f-8445-56ea1ec33608 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=eb411604-4b2f-465f-8445-56ea1ec33608 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.421 13:49:38 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:14.421 13:49:38 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.421 13:49:38 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.421 13:49:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.421 13:49:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.421 13:49:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.421 13:49:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:14.421 13:49:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.421 13:49:38 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:14.422 INFO: launching applications... 00:08:14.422 13:49:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:14.422 13:49:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:14.422 13:49:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:14.422 13:49:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:14.422 13:49:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:14.422 13:49:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:14.422 13:49:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:14.422 13:49:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:14.422 13:49:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=63155 00:08:14.422 13:49:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:14.422 Waiting for target to run... 00:08:14.422 13:49:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:14.422 13:49:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 63155 /var/tmp/spdk_tgt.sock 00:08:14.422 13:49:38 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 63155 ']' 00:08:14.422 13:49:38 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:14.422 13:49:38 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:14.422 13:49:38 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:14.422 13:49:38 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.422 13:49:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:14.422 [2024-07-15 13:49:38.948871] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
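The waitforlisten call traced above polls until spdk_tgt answers on its UNIX socket; the loop body runs with xtrace disabled, so its iterations never appear in the log. A minimal sketch of the idea, assuming a retry-an-RPC shape (max_retries=100 comes from the trace; the probe command and interval are guesses, not the real common/autotest_common.sh internals):

    sock=/var/tmp/spdk_tgt.sock
    for (( i = 0; i < 100; i++ )); do            # max_retries=100, as traced
        if [ -S "$sock" ] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            break                                # target is up and serving RPC
        fi
        sleep 0.1                                # assumed poll interval
    done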
00:08:14.422 [2024-07-15 13:49:38.949087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63155 ] 00:08:14.988 [2024-07-15 13:49:39.300489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.988 [2024-07-15 13:49:39.484684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.922 13:49:40 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:15.922 00:08:15.922 13:49:40 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:08:15.922 13:49:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:15.922 INFO: shutting down applications... 00:08:15.922 13:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:15.922 13:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:15.922 13:49:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:15.922 13:49:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:15.922 13:49:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 63155 ]] 00:08:15.922 13:49:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 63155 00:08:15.922 13:49:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:15.922 13:49:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:15.922 13:49:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63155 00:08:15.922 13:49:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:16.182 13:49:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:16.182 13:49:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:16.182 13:49:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63155 00:08:16.182 13:49:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:16.876 13:49:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:16.876 13:49:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:16.876 13:49:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63155 00:08:16.876 13:49:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:17.134 13:49:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:17.134 13:49:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:17.134 13:49:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63155 00:08:17.134 13:49:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:17.702 13:49:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:17.702 13:49:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:17.702 13:49:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63155 00:08:17.702 13:49:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:18.270 13:49:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:18.270 13:49:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:18.270 13:49:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63155 
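The shutdown poll running through these trace lines follows a simple pattern: json_config/common.sh sends SIGINT to the target, then checks kill -0 every half second, up to 30 times, until the process is gone. A standalone rendering of that loop (PID value copied from the trace):

    pid=63155                                    # spdk_tgt PID from the trace
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break      # loop ends once the target exits
        sleep 0.5
    done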
00:08:18.270 13:49:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:18.270 13:49:42 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:18.270 13:49:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:18.270 SPDK target shutdown done 00:08:18.270 13:49:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:18.270 Success 00:08:18.270 13:49:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:18.270 00:08:18.270 real 0m3.868s 00:08:18.270 user 0m3.775s 00:08:18.270 sys 0m0.477s 00:08:18.270 13:49:42 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.270 13:49:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:18.270 ************************************ 00:08:18.270 END TEST json_config_extra_key 00:08:18.270 ************************************ 00:08:18.270 13:49:42 -- common/autotest_common.sh@1142 -- # return 0 00:08:18.270 13:49:42 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:18.270 13:49:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:18.270 13:49:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.270 13:49:42 -- common/autotest_common.sh@10 -- # set +x 00:08:18.270 ************************************ 00:08:18.270 START TEST alias_rpc 00:08:18.270 ************************************ 00:08:18.270 13:49:42 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:18.270 * Looking for test storage... 00:08:18.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:18.270 13:49:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:18.270 13:49:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=63247 00:08:18.271 13:49:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 63247 00:08:18.271 13:49:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:18.271 13:49:42 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 63247 ']' 00:08:18.271 13:49:42 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.271 13:49:42 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.271 13:49:42 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.271 13:49:42 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.271 13:49:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.529 [2024-07-15 13:49:42.886524] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:08:18.529 [2024-07-15 13:49:42.886715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63247 ] 00:08:18.529 [2024-07-15 13:49:43.059729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.806 [2024-07-15 13:49:43.253699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.742 13:49:43 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.742 13:49:43 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:19.742 13:49:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:20.001 13:49:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 63247 00:08:20.001 13:49:44 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 63247 ']' 00:08:20.001 13:49:44 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 63247 00:08:20.001 13:49:44 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:08:20.001 13:49:44 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.001 13:49:44 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63247 00:08:20.001 13:49:44 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:20.001 13:49:44 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:20.001 killing process with pid 63247 00:08:20.001 13:49:44 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63247' 00:08:20.001 13:49:44 alias_rpc -- common/autotest_common.sh@967 -- # kill 63247 00:08:20.001 13:49:44 alias_rpc -- common/autotest_common.sh@972 -- # wait 63247 00:08:21.968 00:08:21.968 real 0m3.810s 00:08:21.968 user 0m3.996s 00:08:21.968 sys 0m0.492s 00:08:21.968 13:49:46 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.968 13:49:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.968 ************************************ 00:08:21.968 END TEST alias_rpc 00:08:21.968 ************************************ 00:08:22.226 13:49:46 -- common/autotest_common.sh@1142 -- # return 0 00:08:22.226 13:49:46 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:08:22.226 13:49:46 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:22.226 13:49:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:22.226 13:49:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.226 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:08:22.226 ************************************ 00:08:22.226 START TEST spdkcli_tcp 00:08:22.226 ************************************ 00:08:22.226 13:49:46 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:22.226 * Looking for test storage... 
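The killprocess teardown traced in the alias_rpc section above first verifies the PID is still alive (kill -0) and inspects its comm name — reactor_0 here — before killing and reaping it, so a recycled PID or a sudo-wrapped target is not signalled blindly. A condensed sketch under those assumptions:

    pid=63247
    if kill -0 "$pid" 2>/dev/null; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        # the traced '[' reactor_0 = sudo ']' test handles sudo-wrapped targets
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap and surface the exit status
    fi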
00:08:22.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:22.226 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:22.226 13:49:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:22.226 13:49:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:22.226 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:22.226 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:22.226 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:22.226 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:22.226 13:49:46 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.226 13:49:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.226 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=63340 00:08:22.226 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 63340 00:08:22.226 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:22.227 13:49:46 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 63340 ']' 00:08:22.227 13:49:46 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.227 13:49:46 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.227 13:49:46 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.227 13:49:46 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.227 13:49:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.227 [2024-07-15 13:49:46.744099] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
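The tcp.sh test drives the target's RPC interface over TCP rather than over the UNIX socket directly: the socat bridge started just below listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, after which rpc.py connects with -s/-p plus retry (-r) and timeout (-t) options. The essential commands, as traced:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods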
00:08:22.227 [2024-07-15 13:49:46.744339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63340 ] 00:08:22.485 [2024-07-15 13:49:46.920077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:22.742 [2024-07-15 13:49:47.162840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.742 [2024-07-15 13:49:47.162846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.677 13:49:47 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.677 13:49:47 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:08:23.677 13:49:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=63363 00:08:23.677 13:49:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:23.677 13:49:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:23.936 [ 00:08:23.936 "bdev_malloc_delete", 00:08:23.936 "bdev_malloc_create", 00:08:23.936 "bdev_null_resize", 00:08:23.936 "bdev_null_delete", 00:08:23.936 "bdev_null_create", 00:08:23.936 "bdev_nvme_cuse_unregister", 00:08:23.936 "bdev_nvme_cuse_register", 00:08:23.936 "bdev_opal_new_user", 00:08:23.936 "bdev_opal_set_lock_state", 00:08:23.936 "bdev_opal_delete", 00:08:23.936 "bdev_opal_get_info", 00:08:23.936 "bdev_opal_create", 00:08:23.936 "bdev_nvme_opal_revert", 00:08:23.936 "bdev_nvme_opal_init", 00:08:23.936 "bdev_nvme_send_cmd", 00:08:23.936 "bdev_nvme_get_path_iostat", 00:08:23.936 "bdev_nvme_get_mdns_discovery_info", 00:08:23.936 "bdev_nvme_stop_mdns_discovery", 00:08:23.936 "bdev_nvme_start_mdns_discovery", 00:08:23.936 "bdev_nvme_set_multipath_policy", 00:08:23.936 "bdev_nvme_set_preferred_path", 00:08:23.936 "bdev_nvme_get_io_paths", 00:08:23.936 "bdev_nvme_remove_error_injection", 00:08:23.936 "bdev_nvme_add_error_injection", 00:08:23.936 "bdev_nvme_get_discovery_info", 00:08:23.936 "bdev_nvme_stop_discovery", 00:08:23.936 "bdev_nvme_start_discovery", 00:08:23.936 "bdev_nvme_get_controller_health_info", 00:08:23.936 "bdev_nvme_disable_controller", 00:08:23.936 "bdev_nvme_enable_controller", 00:08:23.936 "bdev_nvme_reset_controller", 00:08:23.936 "bdev_nvme_get_transport_statistics", 00:08:23.936 "bdev_nvme_apply_firmware", 00:08:23.936 "bdev_nvme_detach_controller", 00:08:23.937 "bdev_nvme_get_controllers", 00:08:23.937 "bdev_nvme_attach_controller", 00:08:23.937 "bdev_nvme_set_hotplug", 00:08:23.937 "bdev_nvme_set_options", 00:08:23.937 "bdev_passthru_delete", 00:08:23.937 "bdev_passthru_create", 00:08:23.937 "bdev_lvol_set_parent_bdev", 00:08:23.937 "bdev_lvol_set_parent", 00:08:23.937 "bdev_lvol_check_shallow_copy", 00:08:23.937 "bdev_lvol_start_shallow_copy", 00:08:23.937 "bdev_lvol_grow_lvstore", 00:08:23.937 "bdev_lvol_get_lvols", 00:08:23.937 "bdev_lvol_get_lvstores", 00:08:23.937 "bdev_lvol_delete", 00:08:23.937 "bdev_lvol_set_read_only", 00:08:23.937 "bdev_lvol_resize", 00:08:23.937 "bdev_lvol_decouple_parent", 00:08:23.937 "bdev_lvol_inflate", 00:08:23.937 "bdev_lvol_rename", 00:08:23.937 "bdev_lvol_clone_bdev", 00:08:23.937 "bdev_lvol_clone", 00:08:23.937 "bdev_lvol_snapshot", 00:08:23.937 "bdev_lvol_create", 00:08:23.937 "bdev_lvol_delete_lvstore", 00:08:23.937 "bdev_lvol_rename_lvstore", 00:08:23.937 "bdev_lvol_create_lvstore", 
00:08:23.937 "bdev_raid_set_options", 00:08:23.937 "bdev_raid_remove_base_bdev", 00:08:23.937 "bdev_raid_add_base_bdev", 00:08:23.937 "bdev_raid_delete", 00:08:23.937 "bdev_raid_create", 00:08:23.937 "bdev_raid_get_bdevs", 00:08:23.937 "bdev_error_inject_error", 00:08:23.937 "bdev_error_delete", 00:08:23.937 "bdev_error_create", 00:08:23.937 "bdev_split_delete", 00:08:23.937 "bdev_split_create", 00:08:23.937 "bdev_delay_delete", 00:08:23.937 "bdev_delay_create", 00:08:23.937 "bdev_delay_update_latency", 00:08:23.937 "bdev_zone_block_delete", 00:08:23.937 "bdev_zone_block_create", 00:08:23.937 "blobfs_create", 00:08:23.937 "blobfs_detect", 00:08:23.937 "blobfs_set_cache_size", 00:08:23.937 "bdev_xnvme_delete", 00:08:23.937 "bdev_xnvme_create", 00:08:23.937 "bdev_aio_delete", 00:08:23.937 "bdev_aio_rescan", 00:08:23.937 "bdev_aio_create", 00:08:23.937 "bdev_ftl_set_property", 00:08:23.937 "bdev_ftl_get_properties", 00:08:23.937 "bdev_ftl_get_stats", 00:08:23.937 "bdev_ftl_unmap", 00:08:23.937 "bdev_ftl_unload", 00:08:23.937 "bdev_ftl_delete", 00:08:23.937 "bdev_ftl_load", 00:08:23.937 "bdev_ftl_create", 00:08:23.937 "bdev_virtio_attach_controller", 00:08:23.937 "bdev_virtio_scsi_get_devices", 00:08:23.937 "bdev_virtio_detach_controller", 00:08:23.937 "bdev_virtio_blk_set_hotplug", 00:08:23.937 "bdev_iscsi_delete", 00:08:23.937 "bdev_iscsi_create", 00:08:23.937 "bdev_iscsi_set_options", 00:08:23.937 "accel_error_inject_error", 00:08:23.937 "ioat_scan_accel_module", 00:08:23.937 "dsa_scan_accel_module", 00:08:23.937 "iaa_scan_accel_module", 00:08:23.937 "keyring_file_remove_key", 00:08:23.937 "keyring_file_add_key", 00:08:23.937 "keyring_linux_set_options", 00:08:23.937 "iscsi_get_histogram", 00:08:23.937 "iscsi_enable_histogram", 00:08:23.937 "iscsi_set_options", 00:08:23.937 "iscsi_get_auth_groups", 00:08:23.937 "iscsi_auth_group_remove_secret", 00:08:23.937 "iscsi_auth_group_add_secret", 00:08:23.937 "iscsi_delete_auth_group", 00:08:23.937 "iscsi_create_auth_group", 00:08:23.937 "iscsi_set_discovery_auth", 00:08:23.937 "iscsi_get_options", 00:08:23.937 "iscsi_target_node_request_logout", 00:08:23.937 "iscsi_target_node_set_redirect", 00:08:23.937 "iscsi_target_node_set_auth", 00:08:23.937 "iscsi_target_node_add_lun", 00:08:23.937 "iscsi_get_stats", 00:08:23.937 "iscsi_get_connections", 00:08:23.937 "iscsi_portal_group_set_auth", 00:08:23.937 "iscsi_start_portal_group", 00:08:23.937 "iscsi_delete_portal_group", 00:08:23.937 "iscsi_create_portal_group", 00:08:23.937 "iscsi_get_portal_groups", 00:08:23.937 "iscsi_delete_target_node", 00:08:23.937 "iscsi_target_node_remove_pg_ig_maps", 00:08:23.937 "iscsi_target_node_add_pg_ig_maps", 00:08:23.937 "iscsi_create_target_node", 00:08:23.937 "iscsi_get_target_nodes", 00:08:23.937 "iscsi_delete_initiator_group", 00:08:23.937 "iscsi_initiator_group_remove_initiators", 00:08:23.937 "iscsi_initiator_group_add_initiators", 00:08:23.937 "iscsi_create_initiator_group", 00:08:23.937 "iscsi_get_initiator_groups", 00:08:23.937 "nvmf_set_crdt", 00:08:23.937 "nvmf_set_config", 00:08:23.937 "nvmf_set_max_subsystems", 00:08:23.937 "nvmf_stop_mdns_prr", 00:08:23.937 "nvmf_publish_mdns_prr", 00:08:23.937 "nvmf_subsystem_get_listeners", 00:08:23.937 "nvmf_subsystem_get_qpairs", 00:08:23.937 "nvmf_subsystem_get_controllers", 00:08:23.937 "nvmf_get_stats", 00:08:23.937 "nvmf_get_transports", 00:08:23.937 "nvmf_create_transport", 00:08:23.937 "nvmf_get_targets", 00:08:23.937 "nvmf_delete_target", 00:08:23.937 "nvmf_create_target", 00:08:23.937 
"nvmf_subsystem_allow_any_host", 00:08:23.937 "nvmf_subsystem_remove_host", 00:08:23.937 "nvmf_subsystem_add_host", 00:08:23.937 "nvmf_ns_remove_host", 00:08:23.937 "nvmf_ns_add_host", 00:08:23.937 "nvmf_subsystem_remove_ns", 00:08:23.937 "nvmf_subsystem_add_ns", 00:08:23.937 "nvmf_subsystem_listener_set_ana_state", 00:08:23.937 "nvmf_discovery_get_referrals", 00:08:23.937 "nvmf_discovery_remove_referral", 00:08:23.937 "nvmf_discovery_add_referral", 00:08:23.937 "nvmf_subsystem_remove_listener", 00:08:23.937 "nvmf_subsystem_add_listener", 00:08:23.937 "nvmf_delete_subsystem", 00:08:23.937 "nvmf_create_subsystem", 00:08:23.937 "nvmf_get_subsystems", 00:08:23.937 "env_dpdk_get_mem_stats", 00:08:23.937 "nbd_get_disks", 00:08:23.937 "nbd_stop_disk", 00:08:23.937 "nbd_start_disk", 00:08:23.937 "ublk_recover_disk", 00:08:23.937 "ublk_get_disks", 00:08:23.937 "ublk_stop_disk", 00:08:23.937 "ublk_start_disk", 00:08:23.937 "ublk_destroy_target", 00:08:23.937 "ublk_create_target", 00:08:23.937 "virtio_blk_create_transport", 00:08:23.937 "virtio_blk_get_transports", 00:08:23.937 "vhost_controller_set_coalescing", 00:08:23.937 "vhost_get_controllers", 00:08:23.937 "vhost_delete_controller", 00:08:23.937 "vhost_create_blk_controller", 00:08:23.937 "vhost_scsi_controller_remove_target", 00:08:23.937 "vhost_scsi_controller_add_target", 00:08:23.937 "vhost_start_scsi_controller", 00:08:23.937 "vhost_create_scsi_controller", 00:08:23.937 "thread_set_cpumask", 00:08:23.937 "framework_get_governor", 00:08:23.937 "framework_get_scheduler", 00:08:23.937 "framework_set_scheduler", 00:08:23.937 "framework_get_reactors", 00:08:23.937 "thread_get_io_channels", 00:08:23.937 "thread_get_pollers", 00:08:23.937 "thread_get_stats", 00:08:23.937 "framework_monitor_context_switch", 00:08:23.937 "spdk_kill_instance", 00:08:23.937 "log_enable_timestamps", 00:08:23.937 "log_get_flags", 00:08:23.937 "log_clear_flag", 00:08:23.937 "log_set_flag", 00:08:23.937 "log_get_level", 00:08:23.937 "log_set_level", 00:08:23.937 "log_get_print_level", 00:08:23.937 "log_set_print_level", 00:08:23.937 "framework_enable_cpumask_locks", 00:08:23.937 "framework_disable_cpumask_locks", 00:08:23.937 "framework_wait_init", 00:08:23.937 "framework_start_init", 00:08:23.937 "scsi_get_devices", 00:08:23.937 "bdev_get_histogram", 00:08:23.937 "bdev_enable_histogram", 00:08:23.937 "bdev_set_qos_limit", 00:08:23.937 "bdev_set_qd_sampling_period", 00:08:23.937 "bdev_get_bdevs", 00:08:23.937 "bdev_reset_iostat", 00:08:23.937 "bdev_get_iostat", 00:08:23.937 "bdev_examine", 00:08:23.937 "bdev_wait_for_examine", 00:08:23.937 "bdev_set_options", 00:08:23.937 "notify_get_notifications", 00:08:23.937 "notify_get_types", 00:08:23.937 "accel_get_stats", 00:08:23.937 "accel_set_options", 00:08:23.937 "accel_set_driver", 00:08:23.937 "accel_crypto_key_destroy", 00:08:23.937 "accel_crypto_keys_get", 00:08:23.937 "accel_crypto_key_create", 00:08:23.937 "accel_assign_opc", 00:08:23.937 "accel_get_module_info", 00:08:23.937 "accel_get_opc_assignments", 00:08:23.937 "vmd_rescan", 00:08:23.937 "vmd_remove_device", 00:08:23.937 "vmd_enable", 00:08:23.937 "sock_get_default_impl", 00:08:23.937 "sock_set_default_impl", 00:08:23.937 "sock_impl_set_options", 00:08:23.937 "sock_impl_get_options", 00:08:23.937 "iobuf_get_stats", 00:08:23.937 "iobuf_set_options", 00:08:23.937 "framework_get_pci_devices", 00:08:23.937 "framework_get_config", 00:08:23.937 "framework_get_subsystems", 00:08:23.937 "trace_get_info", 00:08:23.937 "trace_get_tpoint_group_mask", 00:08:23.937 
"trace_disable_tpoint_group", 00:08:23.937 "trace_enable_tpoint_group", 00:08:23.937 "trace_clear_tpoint_mask", 00:08:23.937 "trace_set_tpoint_mask", 00:08:23.937 "keyring_get_keys", 00:08:23.937 "spdk_get_version", 00:08:23.937 "rpc_get_methods" 00:08:23.937 ] 00:08:23.937 13:49:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.937 13:49:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:23.937 13:49:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 63340 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 63340 ']' 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 63340 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63340 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:23.937 killing process with pid 63340 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63340' 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 63340 00:08:23.937 13:49:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 63340 00:08:26.485 00:08:26.485 real 0m4.004s 00:08:26.485 user 0m7.196s 00:08:26.486 sys 0m0.511s 00:08:26.486 ************************************ 00:08:26.486 END TEST spdkcli_tcp 00:08:26.486 ************************************ 00:08:26.486 13:49:50 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.486 13:49:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:26.486 13:49:50 -- common/autotest_common.sh@1142 -- # return 0 00:08:26.486 13:49:50 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:26.486 13:49:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:26.486 13:49:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.486 13:49:50 -- common/autotest_common.sh@10 -- # set +x 00:08:26.486 ************************************ 00:08:26.486 START TEST dpdk_mem_utility 00:08:26.486 ************************************ 00:08:26.486 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:26.486 * Looking for test storage... 
00:08:26.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:26.486 13:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:26.486 13:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63455 00:08:26.486 13:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:26.486 13:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63455 00:08:26.486 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 63455 ']' 00:08:26.486 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.486 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.486 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.486 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.486 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:26.486 [2024-07-15 13:49:50.789566] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:26.486 [2024-07-15 13:49:50.789756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63455 ] 00:08:26.486 [2024-07-15 13:49:51.000062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.743 [2024-07-15 13:49:51.195643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.676 13:49:51 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.676 13:49:51 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:08:27.676 13:49:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:27.676 13:49:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:27.677 13:49:51 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.677 13:49:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:27.677 { 00:08:27.677 "filename": "/tmp/spdk_mem_dump.txt" 00:08:27.677 } 00:08:27.677 13:49:51 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.677 13:49:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:27.677 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:27.677 1 heaps totaling size 820.000000 MiB 00:08:27.677 size: 820.000000 MiB heap id: 0 00:08:27.677 end heaps---------- 00:08:27.677 8 mempools totaling size 598.116089 MiB 00:08:27.677 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:27.677 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:27.677 size: 84.521057 MiB name: bdev_io_63455 00:08:27.677 size: 51.011292 MiB name: evtpool_63455 00:08:27.677 size: 50.003479 MiB name: msgpool_63455 00:08:27.677 size: 21.763794 MiB name: PDU_Pool 00:08:27.677 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:08:27.677 size: 0.026123 MiB name: Session_Pool 00:08:27.677 end mempools------- 00:08:27.677 6 memzones totaling size 4.142822 MiB 00:08:27.677 size: 1.000366 MiB name: RG_ring_0_63455 00:08:27.677 size: 1.000366 MiB name: RG_ring_1_63455 00:08:27.677 size: 1.000366 MiB name: RG_ring_4_63455 00:08:27.677 size: 1.000366 MiB name: RG_ring_5_63455 00:08:27.677 size: 0.125366 MiB name: RG_ring_2_63455 00:08:27.677 size: 0.015991 MiB name: RG_ring_3_63455 00:08:27.677 end memzones------- 00:08:27.677 13:49:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:27.677 heap id: 0 total size: 820.000000 MiB number of busy elements: 298 number of free elements: 18 00:08:27.677 list of free elements. size: 18.452026 MiB 00:08:27.677 element at address: 0x200000400000 with size: 1.999451 MiB 00:08:27.677 element at address: 0x200000800000 with size: 1.996887 MiB 00:08:27.677 element at address: 0x200007000000 with size: 1.995972 MiB 00:08:27.677 element at address: 0x20000b200000 with size: 1.995972 MiB 00:08:27.677 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:27.677 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:27.677 element at address: 0x200019600000 with size: 0.999084 MiB 00:08:27.677 element at address: 0x200003e00000 with size: 0.996094 MiB 00:08:27.677 element at address: 0x200032200000 with size: 0.994324 MiB 00:08:27.677 element at address: 0x200018e00000 with size: 0.959656 MiB 00:08:27.677 element at address: 0x200019900040 with size: 0.936401 MiB 00:08:27.677 element at address: 0x200000200000 with size: 0.830200 MiB 00:08:27.677 element at address: 0x20001b000000 with size: 0.564636 MiB 00:08:27.677 element at address: 0x200019200000 with size: 0.487976 MiB 00:08:27.677 element at address: 0x200019a00000 with size: 0.485413 MiB 00:08:27.677 element at address: 0x200013800000 with size: 0.467651 MiB 00:08:27.677 element at address: 0x200028400000 with size: 0.390442 MiB 00:08:27.677 element at address: 0x200003a00000 with size: 0.351990 MiB 00:08:27.677 list of standard malloc elements. 
size: 199.283569 MiB 00:08:27.677 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:08:27.677 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:08:27.677 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:27.677 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:27.677 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:27.677 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:27.677 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:08:27.677 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:27.677 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:08:27.677 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:08:27.677 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:08:27.677 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:08:27.677 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003aff980 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003affa80 with size: 0.000244 MiB 00:08:27.677 element at address: 0x200003eff000 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:27.677 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:27.677 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:08:27.677 [... some two hundred further "element at address: 0x... with size: 0.000244 MiB" free-element entries elided; they span the 0x2000137f*, 0x2000138*, 0x2000192*, 0x2000199*, 0x20001b09* and 0x2000284* regions, and every one reports the identical 0.000244 MiB size ...] 00:08:27.679 element at address: 0x20002846fc80
with size: 0.000244 MiB 00:08:27.679 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:27.679 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:27.679 list of memzone associated elements. size: 602.264404 MiB 00:08:27.679 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:08:27.679 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:27.679 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:08:27.679 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:27.679 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:08:27.679 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63455_0 00:08:27.679 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:08:27.679 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63455_0 00:08:27.679 element at address: 0x200003fff340 with size: 48.003113 MiB 00:08:27.679 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63455_0 00:08:27.679 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:08:27.679 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:27.679 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:08:27.679 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:27.679 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:08:27.679 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63455 00:08:27.679 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:08:27.679 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63455 00:08:27.679 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:27.679 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63455 00:08:27.679 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:27.679 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:27.679 element at address: 0x200019abc780 with size: 1.008179 MiB 00:08:27.679 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:27.679 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:27.679 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:27.679 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:08:27.679 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:27.679 element at address: 0x200003eff100 with size: 1.000549 MiB 00:08:27.679 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63455 00:08:27.679 element at address: 0x200003affb80 with size: 1.000549 MiB 00:08:27.679 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63455 00:08:27.679 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:08:27.679 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63455 00:08:27.679 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:08:27.679 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63455 00:08:27.679 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:08:27.679 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63455 00:08:27.679 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:08:27.679 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:27.679 element at address: 0x200013878680 with size: 0.500549 MiB 00:08:27.679 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:27.679 element at address: 0x200019a7c440 with size: 
0.250549 MiB 00:08:27.679 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:27.679 element at address: 0x200003adf740 with size: 0.125549 MiB 00:08:27.679 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63455 00:08:27.679 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:08:27.679 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:27.679 element at address: 0x200028464140 with size: 0.023804 MiB 00:08:27.679 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:27.679 element at address: 0x200003adb500 with size: 0.016174 MiB 00:08:27.679 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63455 00:08:27.679 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:08:27.679 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:27.679 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:08:27.679 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63455 00:08:27.679 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:08:27.679 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63455 00:08:27.679 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:08:27.679 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:27.679 13:49:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:27.679 13:49:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63455 00:08:27.679 13:49:52 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 63455 ']' 00:08:27.679 13:49:52 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 63455 00:08:27.679 13:49:52 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:08:27.679 13:49:52 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.679 13:49:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63455 00:08:27.679 13:49:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:27.679 13:49:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.679 killing process with pid 63455 00:08:27.679 13:49:52 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63455' 00:08:27.679 13:49:52 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 63455 00:08:27.679 13:49:52 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 63455 00:08:30.250 00:08:30.250 real 0m3.720s 00:08:30.250 user 0m3.951s 00:08:30.250 sys 0m0.488s 00:08:30.250 13:49:54 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.250 13:49:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:30.250 ************************************ 00:08:30.250 END TEST dpdk_mem_utility 00:08:30.250 ************************************ 00:08:30.250 13:49:54 -- common/autotest_common.sh@1142 -- # return 0 00:08:30.250 13:49:54 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:30.250 13:49:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:30.250 13:49:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.250 13:49:54 -- common/autotest_common.sh@10 -- # set +x 00:08:30.250 ************************************ 00:08:30.250 START TEST event 00:08:30.250 ************************************ 00:08:30.250 13:49:54 event -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:30.250 * Looking for test storage... 00:08:30.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:30.250 13:49:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:30.250 13:49:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:30.250 13:49:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:30.250 13:49:54 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:30.250 13:49:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.250 13:49:54 event -- common/autotest_common.sh@10 -- # set +x 00:08:30.250 ************************************ 00:08:30.250 START TEST event_perf 00:08:30.250 ************************************ 00:08:30.250 13:49:54 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:30.250 Running I/O for 1 seconds...[2024-07-15 13:49:54.472582] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:30.250 [2024-07-15 13:49:54.472710] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63550 ] 00:08:30.250 [2024-07-15 13:49:54.636822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.509 Running I/O for 1 seconds...[2024-07-15 13:49:54.838769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.509 [2024-07-15 13:49:54.838879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.509 [2024-07-15 13:49:54.839050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.509 [2024-07-15 13:49:54.839060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.883 00:08:31.883 lcore 0: 178859 00:08:31.883 lcore 1: 178859 00:08:31.883 lcore 2: 178860 00:08:31.883 lcore 3: 178857 00:08:31.883 done. 00:08:31.883 00:08:31.883 real 0m1.815s 00:08:31.883 user 0m4.593s 00:08:31.883 sys 0m0.097s 00:08:31.883 13:49:56 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.883 13:49:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:31.883 ************************************ 00:08:31.883 END TEST event_perf 00:08:31.883 ************************************ 00:08:31.883 13:49:56 event -- common/autotest_common.sh@1142 -- # return 0 00:08:31.883 13:49:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:31.883 13:49:56 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:31.883 13:49:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.883 13:49:56 event -- common/autotest_common.sh@10 -- # set +x 00:08:31.883 ************************************ 00:08:31.883 START TEST event_reactor 00:08:31.883 ************************************ 00:08:31.883 13:49:56 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:31.883 [2024-07-15 13:49:56.339259] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
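The four "lcore N:" counters in the event_perf output above are per-reactor event counts for the one-second run (-t 1), so the aggregate rate is just their sum. A throwaway post-processing sketch, assuming the counters have been captured to a file (event_perf.log is an illustrative name):

    # sum the per-lcore counters printed by event_perf (fields: "lcore", "N:", count)
    grep '^lcore' event_perf.log | awk '{ sum += $3 } END { printf "aggregate: %d events/sec\n", sum }'
    # for the run above: 178859 + 178859 + 178860 + 178857 = 715435 events/sec across 4 cores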
00:08:31.883 [2024-07-15 13:49:56.339438] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63595 ] 00:08:32.141 [2024-07-15 13:49:56.509926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.400 [2024-07-15 13:49:56.697829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.820 test_start 00:08:33.820 oneshot 00:08:33.820 tick 100 00:08:33.820 tick 100 00:08:33.820 tick 250 00:08:33.820 tick 100 00:08:33.820 tick 100 00:08:33.820 tick 250 00:08:33.820 tick 100 00:08:33.820 tick 500 00:08:33.820 tick 100 00:08:33.820 tick 100 00:08:33.820 tick 250 00:08:33.820 tick 100 00:08:33.820 tick 100 00:08:33.820 test_end 00:08:33.820 00:08:33.820 real 0m1.851s 00:08:33.820 user 0m1.635s 00:08:33.820 sys 0m0.103s 00:08:33.820 13:49:58 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.820 ************************************ 00:08:33.820 END TEST event_reactor 00:08:33.820 ************************************ 00:08:33.820 13:49:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:33.820 13:49:58 event -- common/autotest_common.sh@1142 -- # return 0 00:08:33.820 13:49:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:33.820 13:49:58 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:33.820 13:49:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.820 13:49:58 event -- common/autotest_common.sh@10 -- # set +x 00:08:33.820 ************************************ 00:08:33.820 START TEST event_reactor_perf 00:08:33.820 ************************************ 00:08:33.820 13:49:58 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:33.820 [2024-07-15 13:49:58.228626] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
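In the event_reactor output above, "oneshot" marks the single-shot poller firing and each "tick P" line records one firing of the periodic timer with period P, so the three timers can be tallied straight from the log. An illustrative one-liner (reactor.log is an assumed capture file):

    # count firings per timer period; in the window shown: 9x tick 100, 3x tick 250, 1x tick 500
    grep -E '^tick [0-9]+' reactor.log | sort -k2 -n | uniq -c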
00:08:33.820 [2024-07-15 13:49:58.229451] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63637 ] 00:08:34.079 [2024-07-15 13:49:58.403538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.079 [2024-07-15 13:49:58.596564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.455 test_start 00:08:35.455 test_end 00:08:35.455 Performance: 275219 events per second 00:08:35.455 00:08:35.455 real 0m1.802s 00:08:35.455 user 0m1.587s 00:08:35.455 sys 0m0.103s 00:08:35.455 13:49:59 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.455 13:49:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:35.455 ************************************ 00:08:35.455 END TEST event_reactor_perf 00:08:35.455 ************************************ 00:08:35.716 13:50:00 event -- common/autotest_common.sh@1142 -- # return 0 00:08:35.717 13:50:00 event -- event/event.sh@49 -- # uname -s 00:08:35.717 13:50:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:35.717 13:50:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:35.717 13:50:00 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:35.717 13:50:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.717 13:50:00 event -- common/autotest_common.sh@10 -- # set +x 00:08:35.717 ************************************ 00:08:35.717 START TEST event_scheduler 00:08:35.717 ************************************ 00:08:35.717 13:50:00 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:35.717 * Looking for test storage... 00:08:35.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:35.717 13:50:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:35.717 13:50:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:35.717 13:50:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63705 00:08:35.717 13:50:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:35.717 13:50:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63705 00:08:35.717 13:50:00 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63705 ']' 00:08:35.717 13:50:00 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.717 13:50:00 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.717 13:50:00 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.717 13:50:00 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.717 13:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:35.717 [2024-07-15 13:50:00.217775] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
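waitforlisten above blocks until the scheduler app (pid 63705) has brought up its RPC socket, retrying up to max_retries=100 times. A reduced sketch of that style of loop, not the helper's verbatim body (socket path and retry interval assumed):

    # poll the app's RPC socket until it answers, then proceed (illustrative paraphrase)
    for ((i = 0; i < 100; i++)); do
        rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done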
00:08:35.717 [2024-07-15 13:50:00.218589] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63705 ] 00:08:35.976 [2024-07-15 13:50:00.393788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.235 [2024-07-15 13:50:00.626079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.235 [2024-07-15 13:50:00.626158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.235 [2024-07-15 13:50:00.626269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.235 [2024-07-15 13:50:00.626281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.803 13:50:01 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.803 13:50:01 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:08:36.803 13:50:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:36.803 13:50:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.803 13:50:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:36.803 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:36.803 POWER: Cannot set governor of lcore 0 to userspace 00:08:36.803 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:36.803 POWER: Cannot set governor of lcore 0 to performance 00:08:36.803 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:36.803 POWER: Cannot set governor of lcore 0 to userspace 00:08:36.803 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:36.803 POWER: Cannot set governor of lcore 0 to userspace 00:08:36.803 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:36.803 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:36.803 POWER: Unable to set Power Management Environment for lcore 0 00:08:36.803 [2024-07-15 13:50:01.160176] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:08:36.803 [2024-07-15 13:50:01.160199] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:08:36.803 [2024-07-15 13:50:01.160215] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:08:36.803 [2024-07-15 13:50:01.160238] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:36.803 [2024-07-15 13:50:01.160252] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:36.803 [2024-07-15 13:50:01.160264] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:36.803 13:50:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.803 13:50:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:36.803 13:50:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.803 13:50:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 [2024-07-15 13:50:01.434068] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
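The POWER errors above only mean this VM exposes no cpufreq scaling governors, so the DPDK governor cannot initialize; the dynamic scheduler still comes up with its defaults (load limit 20, core limit 80, core busy 95). Because the app was launched with --wait-for-rpc, the same sequence the test performs can be driven by hand over plain RPC:

    rpc.py framework_set_scheduler dynamic   # pick the scheduler while init is paused
    rpc.py framework_start_init              # finish subsystem initialization
    rpc.py framework_get_scheduler           # confirm the active scheduler and its options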
00:08:37.140 13:50:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:37.140 13:50:01 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:37.140 13:50:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 ************************************ 00:08:37.140 START TEST scheduler_create_thread 00:08:37.140 ************************************ 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 2 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 3 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 4 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 5 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 6 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 7 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 8 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 9 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 10 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:37.140 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.141 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.141 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.141 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:37.141 13:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:37.141 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.141 13:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.711 13:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.711 00:08:37.711 real 0m0.595s 00:08:37.711 user 0m0.014s 00:08:37.711 sys 0m0.006s 00:08:37.711 13:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.711 ************************************ 00:08:37.711 END TEST scheduler_create_thread 00:08:37.711 13:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.711 ************************************ 00:08:37.711 13:50:02 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:08:37.711 13:50:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:37.712 13:50:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63705 00:08:37.712 13:50:02 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63705 ']' 00:08:37.712 13:50:02 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63705 00:08:37.712 13:50:02 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:08:37.712 13:50:02 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:37.712 13:50:02 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63705 00:08:37.712 13:50:02 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:08:37.712 13:50:02 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:08:37.712 killing process with pid 63705 00:08:37.712 13:50:02 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63705' 00:08:37.712 13:50:02 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63705 00:08:37.712 13:50:02 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63705 00:08:38.279 [2024-07-15 13:50:02.521490] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
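The scheduler_create_thread test above drives the app through its RPC plugin: four pinned busy threads (-a 100) on masks 0x1 through 0x8, four pinned idle threads (-a 0), two unpinned ones, then an activity change on thread 11 and a delete of thread 12. With the test's scheduler_plugin on PYTHONPATH, the same calls work standalone (thread name, mask, and activity values here are illustrative):

    rpc.py --plugin scheduler_plugin scheduler_thread_create -n demo_thread -m 0x2 -a 50
    rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 80   # thread id as returned by create
    rpc.py --plugin scheduler_plugin scheduler_thread_delete 11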
00:08:39.214 00:08:39.214 real 0m3.634s 00:08:39.214 user 0m6.774s 00:08:39.214 sys 0m0.399s 00:08:39.214 13:50:03 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.214 13:50:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:39.214 ************************************ 00:08:39.214 END TEST event_scheduler 00:08:39.214 ************************************ 00:08:39.214 13:50:03 event -- common/autotest_common.sh@1142 -- # return 0 00:08:39.214 13:50:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:39.214 13:50:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:39.214 13:50:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:39.214 13:50:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.214 13:50:03 event -- common/autotest_common.sh@10 -- # set +x 00:08:39.214 ************************************ 00:08:39.214 START TEST app_repeat 00:08:39.214 ************************************ 00:08:39.214 13:50:03 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63789 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:39.214 Process app_repeat pid: 63789 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63789' 00:08:39.214 13:50:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:39.214 spdk_app_start Round 0 00:08:39.215 13:50:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:39.215 13:50:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63789 /var/tmp/spdk-nbd.sock 00:08:39.215 13:50:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63789 ']' 00:08:39.215 13:50:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:39.215 13:50:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:39.215 13:50:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:39.215 13:50:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.215 13:50:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:39.471 [2024-07-15 13:50:03.782900] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
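app_repeat_test repeats an identical start/verify/kill cycle three times ("Round 0" through "Round 2"). A condensed paraphrase of the loop, not the verbatim event.sh body:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock       # app up and RPC socket listening
        # create Malloc0/Malloc1, export over /dev/nbd0 and /dev/nbd1, write + verify random data
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                                # let the app tear down before the next round
    done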
00:08:39.471 [2024-07-15 13:50:03.783060] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63789 ] 00:08:39.471 [2024-07-15 13:50:03.945606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:39.729 [2024-07-15 13:50:04.171695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.729 [2024-07-15 13:50:04.171704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.295 13:50:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.295 13:50:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:40.295 13:50:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:40.862 Malloc0 00:08:40.862 13:50:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:41.119 Malloc1 00:08:41.119 13:50:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.119 13:50:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:41.377 /dev/nbd0 00:08:41.377 13:50:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:41.377 13:50:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:41.377 13:50:05 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:41.377 1+0 records in 00:08:41.377 1+0 records out 00:08:41.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112281 s, 3.6 MB/s 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:41.377 13:50:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:41.377 13:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.377 13:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.377 13:50:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:41.941 /dev/nbd1 00:08:41.941 13:50:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:41.941 13:50:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:41.941 1+0 records in 00:08:41.941 1+0 records out 00:08:41.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386912 s, 10.6 MB/s 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:41.941 13:50:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:41.941 13:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.941 13:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.941 13:50:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:41.941 13:50:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.941 
13:50:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:42.199 { 00:08:42.199 "nbd_device": "/dev/nbd0", 00:08:42.199 "bdev_name": "Malloc0" 00:08:42.199 }, 00:08:42.199 { 00:08:42.199 "nbd_device": "/dev/nbd1", 00:08:42.199 "bdev_name": "Malloc1" 00:08:42.199 } 00:08:42.199 ]' 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:42.199 { 00:08:42.199 "nbd_device": "/dev/nbd0", 00:08:42.199 "bdev_name": "Malloc0" 00:08:42.199 }, 00:08:42.199 { 00:08:42.199 "nbd_device": "/dev/nbd1", 00:08:42.199 "bdev_name": "Malloc1" 00:08:42.199 } 00:08:42.199 ]' 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:42.199 /dev/nbd1' 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:42.199 /dev/nbd1' 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:42.199 256+0 records in 00:08:42.199 256+0 records out 00:08:42.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00905927 s, 116 MB/s 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:42.199 256+0 records in 00:08:42.199 256+0 records out 00:08:42.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272652 s, 38.5 MB/s 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:42.199 256+0 records in 00:08:42.199 256+0 records out 00:08:42.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0365822 s, 28.7 MB/s 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:42.199 13:50:06 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.199 13:50:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:42.457 13:50:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:42.457 13:50:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:42.457 13:50:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:42.457 13:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.457 13:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.457 13:50:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:42.457 13:50:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:42.457 13:50:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.457 13:50:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.457 13:50:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:42.715 13:50:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:42.715 13:50:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:42.715 13:50:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:42.715 13:50:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.715 13:50:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.715 13:50:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:42.715 13:50:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:42.715 13:50:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.715 13:50:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:42.715 13:50:07 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.715 13:50:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:43.280 13:50:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:43.281 13:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:43.281 13:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:43.281 13:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:43.281 13:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:43.281 13:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:43.281 13:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:43.281 13:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:43.281 13:50:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:43.281 13:50:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:43.281 13:50:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:43.281 13:50:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:43.281 13:50:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:43.538 13:50:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:44.910 [2024-07-15 13:50:09.250556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:45.168 [2024-07-15 13:50:09.505276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.168 [2024-07-15 13:50:09.505294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.168 [2024-07-15 13:50:09.709786] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:45.168 [2024-07-15 13:50:09.709909] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:46.540 spdk_app_start Round 1 00:08:46.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:46.540 13:50:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:46.540 13:50:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:46.540 13:50:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63789 /var/tmp/spdk-nbd.sock 00:08:46.540 13:50:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63789 ']' 00:08:46.540 13:50:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:46.540 13:50:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.541 13:50:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
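Each round then rebuilds its block devices from scratch: bdev_malloc_create 64 4096 creates a 64 MiB RAM-backed bdev with 4096-byte blocks, nbd_start_disk exposes it as a kernel /dev/nbdX node, and a single direct-I/O dd (the waitfornbd probe seen above) confirms the device is readable before the random-data verify. Run standalone against the same socket:

    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096          # prints the new bdev's name, e.g. Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct            # readability probe, as in waitfornbd
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0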
00:08:46.541 13:50:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.541 13:50:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:47.105 13:50:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.105 13:50:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:47.105 13:50:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:47.397 Malloc0 00:08:47.397 13:50:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:47.676 Malloc1 00:08:47.677 13:50:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.677 13:50:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:47.934 /dev/nbd0 00:08:47.934 13:50:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:47.934 13:50:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:47.934 1+0 records in 00:08:47.934 1+0 records out 
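The waitfornbd helper exercised just above treats a device as ready only once it appears in /proc/partitions and a direct-I/O read of one block succeeds. A rough equivalent, with the 4096-byte probe and retry limit mirroring the trace and the scratch path chosen for illustration:

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # read one 4 KiB block, bypassing the page cache
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }
    waitfornbd nbd0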
00:08:47.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266133 s, 15.4 MB/s 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.934 13:50:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:47.934 13:50:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.934 13:50:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.934 13:50:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:48.499 /dev/nbd1 00:08:48.499 13:50:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:48.499 13:50:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:48.499 1+0 records in 00:08:48.499 1+0 records out 00:08:48.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620413 s, 6.6 MB/s 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.499 13:50:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:48.499 13:50:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.499 13:50:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:48.499 13:50:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.499 13:50:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.499 13:50:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:48.757 { 00:08:48.757 "nbd_device": "/dev/nbd0", 00:08:48.757 "bdev_name": "Malloc0" 00:08:48.757 }, 00:08:48.757 { 00:08:48.757 "nbd_device": "/dev/nbd1", 00:08:48.757 "bdev_name": "Malloc1" 00:08:48.757 } 
00:08:48.757 ]' 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:48.757 { 00:08:48.757 "nbd_device": "/dev/nbd0", 00:08:48.757 "bdev_name": "Malloc0" 00:08:48.757 }, 00:08:48.757 { 00:08:48.757 "nbd_device": "/dev/nbd1", 00:08:48.757 "bdev_name": "Malloc1" 00:08:48.757 } 00:08:48.757 ]' 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:48.757 /dev/nbd1' 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:48.757 /dev/nbd1' 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:48.757 256+0 records in 00:08:48.757 256+0 records out 00:08:48.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00949234 s, 110 MB/s 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:48.757 256+0 records in 00:08:48.757 256+0 records out 00:08:48.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305081 s, 34.4 MB/s 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:48.757 256+0 records in 00:08:48.757 256+0 records out 00:08:48.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330481 s, 31.7 MB/s 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:48.757 13:50:13 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.757 13:50:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:49.015 13:50:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:49.015 13:50:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:49.015 13:50:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:49.015 13:50:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.015 13:50:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.015 13:50:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:49.015 13:50:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:49.015 13:50:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.015 13:50:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.015 13:50:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:49.580 13:50:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:49.580 13:50:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:49.580 13:50:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:49.580 13:50:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.580 13:50:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.580 13:50:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:49.580 13:50:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:49.580 13:50:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.580 13:50:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:49.580 13:50:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.580 13:50:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:49.838 13:50:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:49.838 13:50:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:50.403 13:50:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:51.351 [2024-07-15 13:50:15.857364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.609 [2024-07-15 13:50:16.035475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.609 [2024-07-15 13:50:16.035476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.866 [2024-07-15 13:50:16.202727] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:51.866 [2024-07-15 13:50:16.202818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:53.272 spdk_app_start Round 2 00:08:53.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:53.272 13:50:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:53.272 13:50:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:53.272 13:50:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63789 /var/tmp/spdk-nbd.sock 00:08:53.272 13:50:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63789 ']' 00:08:53.272 13:50:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:53.272 13:50:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.272 13:50:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
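Each round's data pass, visible in the dd and cmp lines above, writes one identical megabyte of random data through both exports and then compares each device back against the source file, so any corruption in the NBD path fails the round. Roughly, with file and device names as in the log:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of test data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                          # exits non-zero on any mismatch
    done
    rm "$tmp"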
00:08:53.272 13:50:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.272 13:50:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:53.530 13:50:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.530 13:50:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:53.530 13:50:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:54.095 Malloc0 00:08:54.095 13:50:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:54.354 Malloc1 00:08:54.354 13:50:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:54.354 13:50:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:54.612 /dev/nbd0 00:08:54.612 13:50:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:54.612 13:50:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:54.612 1+0 records in 00:08:54.612 1+0 records out 
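Round 2 sets up exactly like the earlier rounds: two 64 MiB malloc bdevs with 4096-byte blocks are created over the dedicated RPC socket and exported as NBD devices. Condensed, with sizes and names copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # bdev_malloc_create <size_in_MiB> <block_size>; each call prints the new bdev name
    "$rpc" -s "$sock" bdev_malloc_create 64 4096    # -> Malloc0
    "$rpc" -s "$sock" bdev_malloc_create 64 4096    # -> Malloc1
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1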
00:08:54.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002678 s, 15.3 MB/s 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:54.612 13:50:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:54.612 13:50:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.612 13:50:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:54.612 13:50:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:55.178 /dev/nbd1 00:08:55.178 13:50:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:55.178 13:50:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:55.178 1+0 records in 00:08:55.178 1+0 records out 00:08:55.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352794 s, 11.6 MB/s 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:55.178 13:50:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:55.178 13:50:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:55.178 13:50:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:55.178 13:50:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:55.179 13:50:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.179 13:50:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:55.436 13:50:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:55.437 { 00:08:55.437 "nbd_device": "/dev/nbd0", 00:08:55.437 "bdev_name": "Malloc0" 00:08:55.437 }, 00:08:55.437 { 00:08:55.437 "nbd_device": "/dev/nbd1", 00:08:55.437 "bdev_name": "Malloc1" 00:08:55.437 } 
00:08:55.437 ]' 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:55.437 { 00:08:55.437 "nbd_device": "/dev/nbd0", 00:08:55.437 "bdev_name": "Malloc0" 00:08:55.437 }, 00:08:55.437 { 00:08:55.437 "nbd_device": "/dev/nbd1", 00:08:55.437 "bdev_name": "Malloc1" 00:08:55.437 } 00:08:55.437 ]' 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:55.437 /dev/nbd1' 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:55.437 /dev/nbd1' 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:55.437 256+0 records in 00:08:55.437 256+0 records out 00:08:55.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00725642 s, 145 MB/s 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:55.437 256+0 records in 00:08:55.437 256+0 records out 00:08:55.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307355 s, 34.1 MB/s 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:55.437 256+0 records in 00:08:55.437 256+0 records out 00:08:55.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0344788 s, 30.4 MB/s 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:55.437 13:50:19 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.437 13:50:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:55.695 13:50:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:55.695 13:50:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:55.695 13:50:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:55.695 13:50:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.695 13:50:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.695 13:50:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:55.695 13:50:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:55.695 13:50:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.695 13:50:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.695 13:50:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:56.260 13:50:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:56.260 13:50:20 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:56.540 13:50:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:56.540 13:50:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:56.540 13:50:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:56.540 13:50:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:56.540 13:50:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:56.540 13:50:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:56.540 13:50:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:56.540 13:50:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:56.540 13:50:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:56.540 13:50:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:56.798 13:50:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:58.171 [2024-07-15 13:50:22.475203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.171 [2024-07-15 13:50:22.653448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.171 [2024-07-15 13:50:22.653457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.428 [2024-07-15 13:50:22.819597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:58.428 [2024-07-15 13:50:22.819668] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:59.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:59.850 13:50:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63789 /var/tmp/spdk-nbd.sock 00:08:59.850 13:50:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63789 ']' 00:08:59.850 13:50:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:59.850 13:50:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:59.850 13:50:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
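The three rounds above are driven by a short loop in event.sh: after each verify pass the test asks the app to terminate itself over RPC, sleeps, and re-attaches to the restarted instance. In outline (the setup and verify steps are elided; the RPC call and delay match the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # ... export Malloc0/Malloc1 as NBD, write and verify 1 MiB ...
        "$rpc" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3    # give app_repeat time to notice the signal and restart
    done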
00:08:59.850 13:50:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:59.850 13:50:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:00.135 13:50:24 event.app_repeat -- event/event.sh@39 -- # killprocess 63789 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63789 ']' 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63789 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63789 00:09:00.135 killing process with pid 63789 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63789' 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63789 00:09:00.135 13:50:24 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63789 00:09:01.509 spdk_app_start is called in Round 0. 00:09:01.509 Shutdown signal received, stop current app iteration 00:09:01.509 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization... 00:09:01.509 spdk_app_start is called in Round 1. 00:09:01.509 Shutdown signal received, stop current app iteration 00:09:01.509 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization... 00:09:01.509 spdk_app_start is called in Round 2. 00:09:01.509 Shutdown signal received, stop current app iteration 00:09:01.509 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization... 00:09:01.509 spdk_app_start is called in Round 3. 
00:09:01.509 Shutdown signal received, stop current app iteration 00:09:01.509 13:50:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:01.509 13:50:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:01.509 00:09:01.509 real 0m21.974s 00:09:01.509 user 0m48.124s 00:09:01.509 sys 0m2.848s 00:09:01.509 13:50:25 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.509 13:50:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:01.509 ************************************ 00:09:01.509 END TEST app_repeat 00:09:01.509 ************************************ 00:09:01.509 13:50:25 event -- common/autotest_common.sh@1142 -- # return 0 00:09:01.509 13:50:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:01.509 13:50:25 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:01.509 13:50:25 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.509 13:50:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.509 13:50:25 event -- common/autotest_common.sh@10 -- # set +x 00:09:01.509 ************************************ 00:09:01.509 START TEST cpu_locks 00:09:01.509 ************************************ 00:09:01.509 13:50:25 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:01.509 * Looking for test storage... 00:09:01.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:01.509 13:50:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:01.509 13:50:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:01.509 13:50:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:01.509 13:50:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:01.509 13:50:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.509 13:50:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.509 13:50:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:01.509 ************************************ 00:09:01.509 START TEST default_locks 00:09:01.509 ************************************ 00:09:01.509 13:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:09:01.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.509 13:50:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=64257 00:09:01.509 13:50:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 64257 00:09:01.509 13:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 64257 ']' 00:09:01.509 13:50:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:01.509 13:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.509 13:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.509 13:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
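killprocess, which shut down pid 63789 above and recurs throughout the cpu_locks tests below, refuses to signal blindly: it checks that the pid is alive and that its comm name is an SPDK reactor rather than a sudo wrapper before killing and reaping it. A simplified rendering, with the harness's extra error handling trimmed:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid"    # fail fast if the pid is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")    # reactor_0 for spdk_tgt
            if [ "$process_name" = sudo ]; then
                return 1    # never signal a sudo wrapper directly
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child and propagate its exit status
    }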
00:09:01.509 13:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.509 13:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:01.509 [2024-07-15 13:50:25.962023] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:01.509 [2024-07-15 13:50:25.962949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64257 ] 00:09:01.767 [2024-07-15 13:50:26.148659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.026 [2024-07-15 13:50:26.337851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.593 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.593 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:09:02.593 13:50:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 64257 00:09:02.593 13:50:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 64257 00:09:02.593 13:50:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:03.165 13:50:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 64257 00:09:03.165 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 64257 ']' 00:09:03.165 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 64257 00:09:03.165 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:09:03.165 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.165 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64257 00:09:03.165 killing process with pid 64257 00:09:03.165 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:03.165 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:03.165 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64257' 00:09:03.165 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 64257 00:09:03.165 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 64257 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 64257 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64257 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:05.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
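locks_exist, run right before the kill above, is the core assertion of default_locks: it asks lslocks which file locks the target pid holds and requires at least one spdk_cpu_lock entry. Essentially:

    # assert that the given pid holds an spdk_cpu_lock file lock
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    # in this run the target was pid 64257:
    # locks_exist 64257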
00:09:05.692 ERROR: process (pid: 64257) is no longer running 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 64257 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 64257 ']' 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:05.692 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64257) - No such process 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:05.692 00:09:05.692 real 0m3.808s 00:09:05.692 user 0m3.896s 00:09:05.692 sys 0m0.611s 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.692 ************************************ 00:09:05.692 END TEST default_locks 00:09:05.692 ************************************ 00:09:05.692 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:05.692 13:50:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:05.692 13:50:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:05.692 13:50:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:05.692 13:50:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.692 13:50:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:05.692 ************************************ 00:09:05.692 START TEST default_locks_via_rpc 00:09:05.692 ************************************ 00:09:05.692 13:50:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:09:05.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
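The ERROR line above is the expected result of a negative test: once the target is dead, waitforlisten is run under the harness's NOT wrapper, which inverts the exit status, so the test passes precisely because the wait fails. A simplified inverter (the real harness additionally inspects es, distinguishing signal exits above 128, as the trace shows):

    NOT() {
        if "$@"; then
            return 1    # the wrapped command unexpectedly succeeded
        fi
        return 0
    }
    # pid 64257 was killed earlier, so probing it must fail, which NOT turns into success
    NOT kill -0 64257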
00:09:05.692 13:50:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64332 00:09:05.692 13:50:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64332 00:09:05.692 13:50:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:05.692 13:50:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64332 ']' 00:09:05.692 13:50:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.692 13:50:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.692 13:50:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.692 13:50:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.692 13:50:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.692 [2024-07-15 13:50:29.804277] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:05.692 [2024-07-15 13:50:29.804586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64332 ] 00:09:05.692 [2024-07-15 13:50:29.983968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.692 [2024-07-15 13:50:30.192757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64332 00:09:06.626 13:50:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64332 00:09:06.626 
13:50:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:06.884 13:50:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64332 00:09:06.884 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 64332 ']' 00:09:06.884 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 64332 00:09:06.884 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:09:06.884 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.884 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64332 00:09:06.884 killing process with pid 64332 00:09:06.884 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.884 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.884 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64332' 00:09:06.884 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 64332 00:09:06.884 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 64332 00:09:09.417 ************************************ 00:09:09.417 END TEST default_locks_via_rpc 00:09:09.417 ************************************ 00:09:09.417 00:09:09.417 real 0m3.772s 00:09:09.417 user 0m3.856s 00:09:09.417 sys 0m0.606s 00:09:09.417 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.417 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.417 13:50:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:09.417 13:50:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:09.417 13:50:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:09.417 13:50:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.417 13:50:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:09.417 ************************************ 00:09:09.417 START TEST non_locking_app_on_locked_coremask 00:09:09.417 ************************************ 00:09:09.417 13:50:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:09:09.417 13:50:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64406 00:09:09.417 13:50:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:09.417 13:50:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64406 /var/tmp/spdk.sock 00:09:09.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
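default_locks_via_rpc, wrapped up above, toggles the same locks at runtime instead of at startup: framework_disable_cpumask_locks releases them, framework_enable_cpumask_locks retakes them, and the harness checks the lock files in between. A sketch, assuming the default /var/tmp/spdk_cpu_lock* lock-file location:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks        # release the core lock files
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock*)          # assumed lock-file path
    (( ${#lock_files[@]} == 0 ))                  # nothing should be left behind
    "$rpc" framework_enable_cpumask_locks         # retake the locks
    lslocks -p 64332 | grep -q spdk_cpu_lock      # 64332 was the target pid in this run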
00:09:09.417 13:50:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64406 ']' 00:09:09.417 13:50:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.417 13:50:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.417 13:50:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.417 13:50:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.417 13:50:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:09.417 [2024-07-15 13:50:33.625235] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:09.417 [2024-07-15 13:50:33.625441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64406 ] 00:09:09.417 [2024-07-15 13:50:33.795607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.675 [2024-07-15 13:50:34.019979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.243 13:50:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.243 13:50:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:10.243 13:50:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64422 00:09:10.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:10.243 13:50:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64422 /var/tmp/spdk2.sock 00:09:10.243 13:50:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:10.243 13:50:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64422 ']' 00:09:10.243 13:50:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:10.243 13:50:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.243 13:50:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:10.243 13:50:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.243 13:50:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:10.501 [2024-07-15 13:50:34.859446] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
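The second target launched above is the point of non_locking_app_on_locked_coremask: it shares core mask 0x1 with the first instance but passes --disable-cpumask-locks and its own RPC socket, so it must start even though core 0 is already locked. The launch pair reduces to the following (readiness polling via waitforlisten omitted):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                # first instance takes the core 0 lock
    pid1=$!
    # same core, but opts out of locking and listens on a second RPC socket
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!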
00:09:10.501 [2024-07-15 13:50:34.859597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64422 ] 00:09:10.501 [2024-07-15 13:50:35.035747] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:10.501 [2024-07-15 13:50:35.035856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.067 [2024-07-15 13:50:35.445177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.965 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:12.965 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:12.965 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64406 00:09:12.965 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64406 00:09:12.965 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:13.898 13:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64406 00:09:13.898 13:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64406 ']' 00:09:13.898 13:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64406 00:09:13.898 13:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:13.898 13:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.898 13:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64406 00:09:13.898 killing process with pid 64406 00:09:13.898 13:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.898 13:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.898 13:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64406' 00:09:13.898 13:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64406 00:09:13.898 13:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64406 00:09:18.209 13:50:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64422 00:09:18.209 13:50:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64422 ']' 00:09:18.209 13:50:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64422 00:09:18.209 13:50:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:18.209 13:50:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.209 13:50:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64422 00:09:18.209 killing process with pid 64422 00:09:18.209 13:50:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.209 13:50:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:18.209 13:50:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64422' 00:09:18.209 13:50:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64422 00:09:18.209 13:50:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64422 00:09:20.738 ************************************ 00:09:20.738 END TEST non_locking_app_on_locked_coremask 00:09:20.738 ************************************ 00:09:20.738 00:09:20.738 real 0m11.329s 00:09:20.738 user 0m11.946s 00:09:20.738 sys 0m1.263s 00:09:20.738 13:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.738 13:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:20.738 13:50:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:20.738 13:50:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:20.738 13:50:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.738 13:50:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.738 13:50:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:20.738 ************************************ 00:09:20.738 START TEST locking_app_on_unlocked_coremask 00:09:20.738 ************************************ 00:09:20.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.738 13:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:09:20.738 13:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64565 00:09:20.738 13:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64565 /var/tmp/spdk.sock 00:09:20.738 13:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64565 ']' 00:09:20.738 13:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:20.738 13:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.738 13:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.738 13:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.738 13:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.738 13:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:20.738 [2024-07-15 13:50:44.988175] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
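killprocess, traced at autotest_common.sh@948-@972 throughout this section, always follows the same shape: assert the pid is alive, resolve its command name on Linux (reactor_0 here), check it is not a sudo wrapper, then kill and reap. A hedged reconstruction from the traced fragments; the real helper's guards may differ in detail:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                      # error out if the pid is already gone
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
            [[ $process_name != sudo ]] || return 1         # never kill a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # reap, so the test's timing stays accurate
    }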
00:09:20.738 [2024-07-15 13:50:44.988373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64565 ] 00:09:20.738 [2024-07-15 13:50:45.155323] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:20.738 [2024-07-15 13:50:45.155390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.996 [2024-07-15 13:50:45.340747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:21.563 13:50:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.563 13:50:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:21.563 13:50:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64586 00:09:21.563 13:50:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64586 /var/tmp/spdk2.sock 00:09:21.563 13:50:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64586 ']' 00:09:21.563 13:50:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:21.563 13:50:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:21.563 13:50:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.563 13:50:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:21.563 13:50:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.563 13:50:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:21.821 [2024-07-15 13:50:46.171908] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:09:21.821 [2024-07-15 13:50:46.172092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64586 ] 00:09:21.821 [2024-07-15 13:50:46.352100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.386 [2024-07-15 13:50:46.725538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.912 13:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.912 13:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:24.912 13:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64586 00:09:24.912 13:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64586 00:09:24.912 13:50:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:25.478 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64565 00:09:25.478 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64565 ']' 00:09:25.478 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64565 00:09:25.478 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:25.478 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.478 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64565 00:09:25.478 killing process with pid 64565 00:09:25.478 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:25.478 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:25.478 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64565' 00:09:25.478 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64565 00:09:25.478 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64565 00:09:29.657 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64586 00:09:29.657 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64586 ']' 00:09:29.657 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64586 00:09:29.915 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:29.915 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:29.915 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64586 00:09:29.915 killing process with pid 64586 00:09:29.915 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:29.915 13:50:54 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:29.915 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64586' 00:09:29.915 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64586 00:09:29.915 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64586 00:09:32.438 00:09:32.438 real 0m11.508s 00:09:32.438 user 0m12.307s 00:09:32.438 sys 0m1.251s 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.438 ************************************ 00:09:32.438 END TEST locking_app_on_unlocked_coremask 00:09:32.438 ************************************ 00:09:32.438 13:50:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:32.438 13:50:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:32.438 13:50:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:32.438 13:50:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.438 13:50:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:32.438 ************************************ 00:09:32.438 START TEST locking_app_on_locked_coremask 00:09:32.438 ************************************ 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64729 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64729 /var/tmp/spdk.sock 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64729 ']' 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.438 13:50:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.438 [2024-07-15 13:50:56.566268] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:09:32.438 [2024-07-15 13:50:56.566489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64729 ] 00:09:32.438 [2024-07-15 13:50:56.749358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.438 [2024-07-15 13:50:56.937746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64750 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64750 /var/tmp/spdk2.sock 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64750 /var/tmp/spdk2.sock 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64750 /var/tmp/spdk2.sock 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64750 ']' 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.382 13:50:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:33.382 [2024-07-15 13:50:57.809249] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:09:33.382 [2024-07-15 13:50:57.809456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64750 ] 00:09:33.640 [2024-07-15 13:50:57.990808] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64729 has claimed it. 00:09:33.640 [2024-07-15 13:50:57.990885] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:34.206 ERROR: process (pid: 64750) is no longer running 00:09:34.206 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64750) - No such process 00:09:34.206 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.206 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:09:34.206 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:09:34.206 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:34.206 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:34.206 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:34.206 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64729 00:09:34.207 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64729 00:09:34.207 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:34.207 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64729 00:09:34.207 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64729 ']' 00:09:34.207 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64729 00:09:34.207 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:34.207 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.207 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64729 00:09:34.465 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:34.465 killing process with pid 64729 00:09:34.465 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:34.465 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64729' 00:09:34.465 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64729 00:09:34.465 13:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64729 00:09:37.014 00:09:37.014 real 0m4.572s 00:09:37.014 user 0m5.023s 00:09:37.014 sys 0m0.688s 00:09:37.014 13:51:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.014 ************************************ 00:09:37.014 END 
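The failure above is the point of locking_app_on_locked_coremask: pid 64750 cannot claim core 0 while 64729 holds it, spdk_tgt exits, and the harness's NOT wrapper turns that failure into a pass (the es=1 in the trace). A simplified sketch of the inversion; the traced helper additionally screens out signal exits (es > 128) and an optional expected-output check:

    # succeeds only when the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT waitforlisten 64750 /var/tmp/spdk2.sock   # passes: startup aborted as expected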
TEST locking_app_on_locked_coremask 00:09:37.014 ************************************ 00:09:37.014 13:51:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:37.014 13:51:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:37.014 13:51:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:37.014 13:51:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:37.014 13:51:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.014 13:51:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:37.014 ************************************ 00:09:37.014 START TEST locking_overlapped_coremask 00:09:37.014 ************************************ 00:09:37.014 13:51:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:09:37.015 13:51:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64820 00:09:37.015 13:51:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:37.015 13:51:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64820 /var/tmp/spdk.sock 00:09:37.015 13:51:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64820 ']' 00:09:37.015 13:51:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.015 13:51:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.015 13:51:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.015 13:51:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.015 13:51:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:37.015 [2024-07-15 13:51:01.144330] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:09:37.015 [2024-07-15 13:51:01.144481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64820 ] 00:09:37.015 [2024-07-15 13:51:01.321496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:37.015 [2024-07-15 13:51:01.519114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.015 [2024-07-15 13:51:01.519266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.015 [2024-07-15 13:51:01.519282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64838 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64838 /var/tmp/spdk2.sock 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64838 /var/tmp/spdk2.sock 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64838 /var/tmp/spdk2.sock 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64838 ']' 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.947 13:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:37.947 [2024-07-15 13:51:02.356917] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:09:37.947 [2024-07-15 13:51:02.357095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64838 ] 00:09:38.206 [2024-07-15 13:51:02.538692] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64820 has claimed it. 00:09:38.206 [2024-07-15 13:51:02.538772] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:38.801 ERROR: process (pid: 64838) is no longer running 00:09:38.801 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64838) - No such process 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64820 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64820 ']' 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64820 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64820 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:38.801 killing process with pid 64820 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64820' 00:09:38.801 13:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64820 00:09:38.801 13:51:03 
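The overlap in this test is deliberate: the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so core 2 is contended, and that is exactly the core named in the claim_cpu_cores error above. After the second target dies, check_remaining_locks (cpu_locks.sh@36-38, traced above) verifies that precisely the first target's lock files survive:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files actually present
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2, i.e. mask 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }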
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64820 00:09:41.327 00:09:41.327 real 0m4.334s 00:09:41.327 user 0m11.553s 00:09:41.327 sys 0m0.498s 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:41.327 ************************************ 00:09:41.327 END TEST locking_overlapped_coremask 00:09:41.327 ************************************ 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:41.327 13:51:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:41.327 13:51:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:41.327 13:51:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:41.327 13:51:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.327 13:51:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:41.327 ************************************ 00:09:41.327 START TEST locking_overlapped_coremask_via_rpc 00:09:41.327 ************************************ 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64902 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64902 /var/tmp/spdk.sock 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64902 ']' 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.327 13:51:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.327 [2024-07-15 13:51:05.513172] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:41.327 [2024-07-15 13:51:05.513356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64902 ] 00:09:41.327 [2024-07-15 13:51:05.675785] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:41.327 [2024-07-15 13:51:05.675888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:41.585 [2024-07-15 13:51:05.878786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.585 [2024-07-15 13:51:05.878866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.585 [2024-07-15 13:51:05.878892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.150 13:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.150 13:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:42.150 13:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64920 00:09:42.150 13:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64920 /var/tmp/spdk2.sock 00:09:42.150 13:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:42.150 13:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64920 ']' 00:09:42.150 13:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:42.150 13:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:42.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:42.150 13:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:42.150 13:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:42.150 13:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.408 [2024-07-15 13:51:06.785847] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:42.408 [2024-07-15 13:51:06.786121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64920 ] 00:09:42.667 [2024-07-15 13:51:06.978635] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:42.667 [2024-07-15 13:51:06.978705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:42.925 [2024-07-15 13:51:07.362173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.925 [2024-07-15 13:51:07.365376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.925 [2024-07-15 13:51:07.365385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.455 [2024-07-15 13:51:09.656552] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64902 has claimed it. 
00:09:45.455 request: 00:09:45.455 { 00:09:45.455 "method": "framework_enable_cpumask_locks", 00:09:45.455 "req_id": 1 00:09:45.455 } 00:09:45.455 Got JSON-RPC error response 00:09:45.455 response: 00:09:45.455 { 00:09:45.455 "code": -32603, 00:09:45.455 "message": "Failed to claim CPU core: 2" 00:09:45.455 } 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64902 /var/tmp/spdk.sock 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64902 ']' 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64920 /var/tmp/spdk2.sock 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64920 ']' 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
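The via_rpc variant moves lock claiming to runtime: both targets start with --disable-cpumask-locks, the first claims its cores successfully over JSON-RPC, and the second's attempt produces the -32603 response above because core 2 is already taken. Issuing the same calls by hand would look roughly like this (scripts/rpc.py path assumed relative to an SPDK checkout; the test itself goes through its rpc_cmd wrapper):

    # first target (cores 0-2) claims its locks at runtime -> ok
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # second target (cores 2-4) collides on core 2 ->
    # JSON-RPC error -32603, "Failed to claim CPU core: 2"
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks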
00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.455 13:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.713 13:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.713 13:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:45.713 13:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:45.713 13:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:45.713 13:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:45.713 13:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:45.713 00:09:45.713 real 0m4.827s 00:09:45.713 user 0m1.813s 00:09:45.713 sys 0m0.236s 00:09:45.713 13:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.713 ************************************ 00:09:45.713 END TEST locking_overlapped_coremask_via_rpc 00:09:45.713 13:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.713 ************************************ 00:09:45.971 13:51:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:45.971 13:51:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:45.971 13:51:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64902 ]] 00:09:45.971 13:51:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64902 00:09:45.971 13:51:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64902 ']' 00:09:45.971 13:51:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64902 00:09:45.971 13:51:10 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:09:45.971 13:51:10 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:45.971 13:51:10 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64902 00:09:45.971 13:51:10 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:45.971 13:51:10 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:45.971 killing process with pid 64902 00:09:45.971 13:51:10 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64902' 00:09:45.971 13:51:10 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64902 00:09:45.971 13:51:10 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64902 00:09:48.499 13:51:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64920 ]] 00:09:48.499 13:51:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64920 00:09:48.499 13:51:12 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64920 ']' 00:09:48.499 13:51:12 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64920 00:09:48.499 13:51:12 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:09:48.499 13:51:12 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:48.499 13:51:12 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64920 00:09:48.499 13:51:12 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:09:48.499 13:51:12 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:09:48.499 killing process with pid 64920 00:09:48.499 13:51:12 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64920' 00:09:48.499 13:51:12 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64920 00:09:48.499 13:51:12 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64920 00:09:50.397 13:51:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:50.397 13:51:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:50.397 13:51:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64902 ]] 00:09:50.397 13:51:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64902 00:09:50.397 13:51:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64902 ']' 00:09:50.397 13:51:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64902 00:09:50.397 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64902) - No such process 00:09:50.397 13:51:14 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64902 is not found' 00:09:50.397 Process with pid 64902 is not found 00:09:50.397 13:51:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64920 ]] 00:09:50.397 13:51:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64920 00:09:50.397 13:51:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64920 ']' 00:09:50.397 13:51:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64920 00:09:50.397 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64920) - No such process 00:09:50.397 Process with pid 64920 is not found 00:09:50.397 13:51:14 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64920 is not found' 00:09:50.397 13:51:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:50.397 00:09:50.397 real 0m48.885s 00:09:50.397 user 1m25.335s 00:09:50.397 sys 0m6.067s 00:09:50.397 13:51:14 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.397 13:51:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:50.397 ************************************ 00:09:50.397 END TEST cpu_locks 00:09:50.397 ************************************ 00:09:50.397 13:51:14 event -- common/autotest_common.sh@1142 -- # return 0 00:09:50.397 00:09:50.397 real 1m20.318s 00:09:50.397 user 2m28.153s 00:09:50.397 sys 0m9.843s 00:09:50.397 13:51:14 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.397 13:51:14 event -- common/autotest_common.sh@10 -- # set +x 00:09:50.397 ************************************ 00:09:50.397 END TEST event 00:09:50.397 ************************************ 00:09:50.397 13:51:14 -- common/autotest_common.sh@1142 -- # return 0 00:09:50.397 13:51:14 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:50.397 13:51:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:50.397 13:51:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.397 13:51:14 -- common/autotest_common.sh@10 -- # set +x 00:09:50.398 ************************************ 00:09:50.398 START TEST thread 
00:09:50.398 ************************************ 00:09:50.398 13:51:14 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:50.398 * Looking for test storage... 00:09:50.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:50.398 13:51:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:50.398 13:51:14 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:50.398 13:51:14 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.398 13:51:14 thread -- common/autotest_common.sh@10 -- # set +x 00:09:50.398 ************************************ 00:09:50.398 START TEST thread_poller_perf 00:09:50.398 ************************************ 00:09:50.398 13:51:14 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:50.398 [2024-07-15 13:51:14.838781] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:50.398 [2024-07-15 13:51:14.838933] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65107 ] 00:09:50.656 [2024-07-15 13:51:15.006105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.914 [2024-07-15 13:51:15.241909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.914 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:52.288 ====================================== 00:09:52.288 busy:2214204817 (cyc) 00:09:52.288 total_run_count: 292000 00:09:52.288 tsc_hz: 2200000000 (cyc) 00:09:52.288 ====================================== 00:09:52.288 poller_cost: 7582 (cyc), 3446 (nsec) 00:09:52.288 00:09:52.288 real 0m1.843s 00:09:52.288 user 0m1.639s 00:09:52.288 sys 0m0.093s 00:09:52.288 13:51:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.288 13:51:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:52.288 ************************************ 00:09:52.288 END TEST thread_poller_perf 00:09:52.288 ************************************ 00:09:52.288 13:51:16 thread -- common/autotest_common.sh@1142 -- # return 0 00:09:52.288 13:51:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:52.289 13:51:16 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:52.289 13:51:16 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.289 13:51:16 thread -- common/autotest_common.sh@10 -- # set +x 00:09:52.289 ************************************ 00:09:52.289 START TEST thread_poller_perf 00:09:52.289 ************************************ 00:09:52.289 13:51:16 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:52.289 [2024-07-15 13:51:16.727629] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:09:52.289 [2024-07-15 13:51:16.727779] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65148 ] 00:09:52.546 [2024-07-15 13:51:16.913143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.804 [2024-07-15 13:51:17.116767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.804 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:54.174 ====================================== 00:09:54.174 busy:2204178620 (cyc) 00:09:54.174 total_run_count: 3654000 00:09:54.174 tsc_hz: 2200000000 (cyc) 00:09:54.174 ====================================== 00:09:54.174 poller_cost: 603 (cyc), 274 (nsec) 00:09:54.174 00:09:54.174 real 0m1.829s 00:09:54.174 user 0m1.626s 00:09:54.174 sys 0m0.092s 00:09:54.174 13:51:18 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.174 ************************************ 00:09:54.174 END TEST thread_poller_perf 00:09:54.174 13:51:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:54.174 ************************************ 00:09:54.174 13:51:18 thread -- common/autotest_common.sh@1142 -- # return 0 00:09:54.174 13:51:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:54.174 00:09:54.174 real 0m3.842s 00:09:54.174 user 0m3.332s 00:09:54.174 sys 0m0.286s 00:09:54.174 13:51:18 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.174 13:51:18 thread -- common/autotest_common.sh@10 -- # set +x 00:09:54.174 ************************************ 00:09:54.174 END TEST thread 00:09:54.174 ************************************ 00:09:54.174 13:51:18 -- common/autotest_common.sh@1142 -- # return 0 00:09:54.174 13:51:18 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:54.174 13:51:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:54.174 13:51:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.174 13:51:18 -- common/autotest_common.sh@10 -- # set +x 00:09:54.174 ************************************ 00:09:54.174 START TEST accel 00:09:54.174 ************************************ 00:09:54.174 13:51:18 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:54.174 * Looking for test storage... 00:09:54.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:54.174 13:51:18 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:09:54.174 13:51:18 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:09:54.174 13:51:18 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:54.174 13:51:18 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=65225 00:09:54.174 13:51:18 accel -- accel/accel.sh@63 -- # waitforlisten 65225 00:09:54.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.174 13:51:18 accel -- common/autotest_common.sh@829 -- # '[' -z 65225 ']' 00:09:54.174 13:51:18 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.174 13:51:18 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.174 13:51:18 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
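The poller_cost lines in both thread_poller_perf runs follow directly from the printed counters: busy cycles divided by total_run_count gives cycles per poller invocation, and tsc_hz converts that to nanoseconds. Both runs check out with shell arithmetic:

    tsc_hz=2200000000
    # run 1: 1 us period (timed pollers)
    echo $(( 2214204817 / 292000 ))                         # 7582 cyc
    echo $(( 2214204817 / 292000 * 1000000000 / tsc_hz ))   # 3446 nsec
    # run 2: 0 us period (busy-loop pollers), far cheaper per call
    echo $(( 2204178620 / 3654000 ))                        # 603 cyc
    echo $(( 2204178620 / 3654000 * 1000000000 / tsc_hz ))  # 274 nsec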
00:09:54.174 13:51:18 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.174 13:51:18 accel -- common/autotest_common.sh@10 -- # set +x 00:09:54.174 13:51:18 accel -- accel/accel.sh@61 -- # build_accel_config 00:09:54.174 13:51:18 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:54.174 13:51:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:54.174 13:51:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:54.174 13:51:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.174 13:51:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.174 13:51:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:54.174 13:51:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:54.174 13:51:18 accel -- accel/accel.sh@41 -- # jq -r . 00:09:54.430 [2024-07-15 13:51:18.765734] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:54.430 [2024-07-15 13:51:18.765880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65225 ] 00:09:54.430 [2024-07-15 13:51:18.949607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.687 [2024-07-15 13:51:19.138464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.619 13:51:19 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:55.619 13:51:19 accel -- common/autotest_common.sh@862 -- # return 0 00:09:55.619 13:51:19 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:09:55.619 13:51:19 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:09:55.619 13:51:19 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:09:55.619 13:51:19 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:09:55.619 13:51:19 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:55.619 13:51:19 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:09:55.619 13:51:19 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.619 13:51:19 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:55.619 13:51:19 accel -- common/autotest_common.sh@10 -- # set +x 00:09:55.619 13:51:19 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 
13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.619 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.619 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.619 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.620 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.620 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.620 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.620 13:51:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:55.620 13:51:19 accel -- accel/accel.sh@72 -- # IFS== 00:09:55.620 13:51:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:55.620 13:51:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:55.620 13:51:19 accel -- accel/accel.sh@75 -- # killprocess 65225 00:09:55.620 13:51:19 accel -- common/autotest_common.sh@948 -- # '[' -z 65225 ']' 00:09:55.620 13:51:19 accel -- common/autotest_common.sh@952 -- # kill -0 65225 00:09:55.620 13:51:19 accel -- common/autotest_common.sh@953 -- # uname 00:09:55.620 13:51:19 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:55.620 13:51:19 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65225 00:09:55.620 killing process with pid 65225 00:09:55.620 13:51:19 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:55.620 13:51:19 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:55.620 13:51:19 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65225' 00:09:55.620 13:51:19 accel -- common/autotest_common.sh@967 -- # kill 65225 00:09:55.620 13:51:19 accel -- common/autotest_common.sh@972 -- # wait 65225 00:09:58.211 13:51:22 accel -- accel/accel.sh@76 -- # trap - ERR 00:09:58.211 13:51:22 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:09:58.211 13:51:22 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:58.211 13:51:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.211 13:51:22 accel -- common/autotest_common.sh@10 -- # set +x 00:09:58.211 13:51:22 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:09:58.211 13:51:22 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:58.211 13:51:22 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:09:58.211 13:51:22 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:58.211 13:51:22 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:58.211 13:51:22 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.211 13:51:22 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.211 13:51:22 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:58.211 13:51:22 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:09:58.211 13:51:22 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
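Before moving on to accel_help, the harness tears down the RPC-configured accel_perf instance (pid 65225) through killprocess. A minimal sketch of that helper, reconstructed from the trace above; the real one in common/autotest_common.sh may differ in detail:

    killprocess() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1                       # @948: refuse an empty pid
        kill -0 "$pid" || return 1                      # @952: target must still be alive
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # here: "reactor_0"
        fi
        # @958: the real helper special-cases process_name == sudo; that branch
        # is not taken in this log, so it is omitted from the sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # @972: reap and collect its status
    }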
00:09:58.211 13:51:22 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:58.211 13:51:22 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:09:58.211 13:51:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:58.211 13:51:22 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:58.211 13:51:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:58.211 13:51:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.211 13:51:22 accel -- common/autotest_common.sh@10 -- # set +x 00:09:58.211 ************************************ 00:09:58.211 START TEST accel_missing_filename 00:09:58.211 ************************************ 00:09:58.211 13:51:22 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:09:58.211 13:51:22 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:09:58.211 13:51:22 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:58.211 13:51:22 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:58.211 13:51:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.211 13:51:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:58.211 13:51:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.211 13:51:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:09:58.211 13:51:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:58.211 13:51:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:09:58.211 13:51:22 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:58.211 13:51:22 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:58.211 13:51:22 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.211 13:51:22 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.211 13:51:22 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:58.211 13:51:22 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:09:58.211 13:51:22 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:09:58.211 [2024-07-15 13:51:22.277867] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:58.211 [2024-07-15 13:51:22.278048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65306 ] 00:09:58.211 [2024-07-15 13:51:22.461657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.211 [2024-07-15 13:51:22.675461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.469 [2024-07-15 13:51:22.855407] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.033 [2024-07-15 13:51:23.309231] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:09:59.290 A filename is required. 
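That failure is expected: accel_missing_filename runs accel_perf through the NOT wrapper, which succeeds only when the wrapped command fails. A condensed sketch of NOT and valid_exec_arg as they appear in the trace (common/autotest_common.sh @636-@675); the status folding it performs is exactly the es=234 -> es=106 -> es=1 sequence on the lines that follow:

    valid_exec_arg() {
        local arg=$1
        # only execute things that resolve to a function, builtin, or file
        case "$(type -t "$arg")" in
            function | builtin | file) return 0 ;;
            *) return 1 ;;
        esac
    }

    NOT() {
        local es=0
        valid_exec_arg "$@" || return 1
        "$@" || es=$?
        (( es > 128 )) && es=$((es - 128))   # fold death-by-signal into the signal number
        [[ $es -ne 0 ]] && es=1              # the real helper maps statuses via a case block
        (( !es == 0 ))                       # NOT succeeds only if the command failed
    }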
00:09:59.290 13:51:23 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:09:59.290 13:51:23 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:59.290 13:51:23 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:09:59.290 13:51:23 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:09:59.290 13:51:23 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:09:59.290 13:51:23 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:59.290 00:09:59.290 real 0m1.473s 00:09:59.290 user 0m1.261s 00:09:59.290 sys 0m0.142s 00:09:59.290 13:51:23 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.290 ************************************ 00:09:59.290 END TEST accel_missing_filename 00:09:59.290 ************************************ 00:09:59.290 13:51:23 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:09:59.290 13:51:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:59.290 13:51:23 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.290 13:51:23 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:59.290 13:51:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.290 13:51:23 accel -- common/autotest_common.sh@10 -- # set +x 00:09:59.290 ************************************ 00:09:59.290 START TEST accel_compress_verify 00:09:59.290 ************************************ 00:09:59.290 13:51:23 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.290 13:51:23 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:09:59.290 13:51:23 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.290 13:51:23 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:59.290 13:51:23 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:59.290 13:51:23 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:59.290 13:51:23 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:59.290 13:51:23 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.290 13:51:23 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.290 13:51:23 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:59.290 13:51:23 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:59.290 13:51:23 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:59.290 13:51:23 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.290 13:51:23 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.290 13:51:23 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:59.290 13:51:23 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:09:59.290 13:51:23 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:09:59.290 [2024-07-15 13:51:23.793959] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:59.290 [2024-07-15 13:51:23.794132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65337 ] 00:09:59.547 [2024-07-15 13:51:23.963779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.804 [2024-07-15 13:51:24.188622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.062 [2024-07-15 13:51:24.369702] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:00.320 [2024-07-15 13:51:24.821471] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:10:00.886 00:10:00.886 Compression does not support the verify option, aborting. 00:10:00.886 13:51:25 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:10:00.886 13:51:25 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:00.886 13:51:25 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:10:00.886 13:51:25 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:10:00.886 13:51:25 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:10:00.886 13:51:25 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:00.886 00:10:00.886 real 0m1.464s 00:10:00.886 user 0m1.250s 00:10:00.886 sys 0m0.153s 00:10:00.886 ************************************ 00:10:00.886 13:51:25 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.886 13:51:25 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:10:00.886 END TEST accel_compress_verify 00:10:00.886 ************************************ 00:10:00.886 13:51:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:00.886 13:51:25 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:00.886 13:51:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:00.886 13:51:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.886 13:51:25 accel -- common/autotest_common.sh@10 -- # set +x 00:10:00.886 ************************************ 00:10:00.886 START TEST accel_wrong_workload 00:10:00.887 ************************************ 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
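The block that follows traces build_accel_config, which every invocation runs to assemble the JSON handed to accel_perf via -c /dev/fd/62. With no hardware engines enabled here, all three guards evaluate as [[ 0 -gt 0 ]] and the config stays empty. A sketch of the shape of that helper; the SPDK_TEST_ACCEL_* flag names and *_scan_accel_module RPC names below are illustrative assumptions, not read from this log:

    build_accel_config() {
        accel_json_cfg=()
        # each guard mirrors one of the [[ 0 -gt 0 ]] lines in the trace
        [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
        [[ ${SPDK_TEST_ACCEL_IAA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
        [[ ${SPDK_TEST_ACCEL_IOAT:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "ioat_scan_accel_module"}')
        [[ -n ${ACCEL_EXTRA_JSON:-} ]] && accel_json_cfg+=("$ACCEL_EXTRA_JSON")
        local IFS=,
        # join the collected fragments; this is what accel_perf reads from
        # -c /dev/fd/62 (an empty config list here, since no guard fired)
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }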
00:10:00.887 13:51:25 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:00.887 13:51:25 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:10:00.887 13:51:25 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:00.887 13:51:25 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:00.887 13:51:25 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.887 13:51:25 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.887 13:51:25 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:00.887 13:51:25 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:10:00.887 13:51:25 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:10:00.887 Unsupported workload type: foobar 00:10:00.887 [2024-07-15 13:51:25.287233] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:00.887 accel_perf options: 00:10:00.887 [-h help message] 00:10:00.887 [-q queue depth per core] 00:10:00.887 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:00.887 [-T number of threads per core 00:10:00.887 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:00.887 [-t time in seconds] 00:10:00.887 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:00.887 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:00.887 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:00.887 [-l for compress/decompress workloads, name of uncompressed input file 00:10:00.887 [-S for crc32c workload, use this seed value (default 0) 00:10:00.887 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:00.887 [-f for fill workload, use this BYTE value (default 255) 00:10:00.887 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:00.887 [-y verify result if this switch is on] 00:10:00.887 [-a tasks to allocate per core (default: same value as -q)] 00:10:00.887 Can be used to spread operations across a wider range of memory. 
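Before the wrapper folds that failure into es=1 below, the usage text above already documents enough to reconstruct a typical positive invocation. All flags in this example are taken from that help output, and the binary path appears earlier in this log:

    # run the software crc32c path for 5 seconds at queue depth 64,
    # 4 KiB transfers, seed 32, verifying each result (-y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -q 64 -o 4096 -t 5 -w crc32c -S 32 -y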
00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:00.887 00:10:00.887 real 0m0.063s 00:10:00.887 user 0m0.071s 00:10:00.887 sys 0m0.030s 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.887 13:51:25 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:10:00.887 ************************************ 00:10:00.887 END TEST accel_wrong_workload 00:10:00.887 ************************************ 00:10:00.887 13:51:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:00.887 13:51:25 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:00.887 13:51:25 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:10:00.887 13:51:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.887 13:51:25 accel -- common/autotest_common.sh@10 -- # set +x 00:10:00.887 ************************************ 00:10:00.887 START TEST accel_negative_buffers 00:10:00.887 ************************************ 00:10:00.887 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:00.887 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:10:00.887 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:00.887 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:00.887 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.887 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:00.887 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.887 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:10:00.887 13:51:25 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:00.887 13:51:25 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:10:00.887 13:51:25 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:00.887 13:51:25 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:00.887 13:51:25 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.887 13:51:25 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.887 13:51:25 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:00.887 13:51:25 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:10:00.887 13:51:25 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:10:00.887 -x option must be non-negative. 
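Every case in this file, including the accel_negative_buffers run in progress here, is driven through run_test, which produces the START TEST / END TEST banners and the real/user/sys timings seen throughout. Roughly, judging by the trace (the argument-count check at @1099 and the xtrace toggling at @1105 are trimmed):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"              # the timed command; its real/user/sys lines land in the log
        local es=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return "$es"
    }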
00:10:00.887 [2024-07-15 13:51:25.393962] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:00.887 accel_perf options: 00:10:00.887 [-h help message] 00:10:00.887 [-q queue depth per core] 00:10:00.887 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:00.887 [-T number of threads per core 00:10:00.887 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:00.887 [-t time in seconds] 00:10:00.887 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:00.887 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:00.887 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:00.887 [-l for compress/decompress workloads, name of uncompressed input file 00:10:00.887 [-S for crc32c workload, use this seed value (default 0) 00:10:00.887 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:00.887 [-f for fill workload, use this BYTE value (default 255) 00:10:00.887 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:00.887 [-y verify result if this switch is on] 00:10:00.887 [-a tasks to allocate per core (default: same value as -q)] 00:10:00.887 Can be used to spread operations across a wider range of memory. 00:10:00.887 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:10:00.887 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:00.888 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:00.888 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:00.888 00:10:00.888 real 0m0.066s 00:10:00.888 user 0m0.075s 00:10:00.888 sys 0m0.032s 00:10:00.888 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.888 13:51:25 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:10:00.888 ************************************ 00:10:00.888 END TEST accel_negative_buffers 00:10:00.888 ************************************ 00:10:01.146 13:51:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:01.147 13:51:25 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:01.147 13:51:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:01.147 13:51:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.147 13:51:25 accel -- common/autotest_common.sh@10 -- # set +x 00:10:01.147 ************************************ 00:10:01.147 START TEST accel_crc32c 00:10:01.147 ************************************ 00:10:01.147 13:51:25 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:01.147 13:51:25 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:01.147 [2024-07-15 13:51:25.509543] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:01.147 [2024-07-15 13:51:25.509705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65415 ] 00:10:01.147 [2024-07-15 13:51:25.680667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.405 [2024-07-15 13:51:25.925651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:01.664 13:51:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:03.563 13:51:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:03.563 00:10:03.563 real 0m2.538s 00:10:03.563 user 0m0.014s 00:10:03.563 sys 0m0.005s 00:10:03.563 13:51:27 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:03.563 13:51:27 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:03.563 ************************************ 00:10:03.563 END TEST accel_crc32c 00:10:03.563 ************************************ 00:10:03.563 13:51:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:03.563 13:51:28 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:03.563 13:51:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:03.563 13:51:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.563 13:51:28 accel -- common/autotest_common.sh@10 -- # set +x 00:10:03.563 ************************************ 00:10:03.563 START TEST accel_crc32c_C2 00:10:03.563 ************************************ 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:03.563 13:51:28 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:03.563 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:03.563 [2024-07-15 13:51:28.091144] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:03.563 [2024-07-15 13:51:28.091341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65456 ] 00:10:03.822 [2024-07-15 13:51:28.263968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.081 [2024-07-15 13:51:28.465770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.340 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:04.341 13:51:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:06.242 13:51:30 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:06.242 00:10:06.242 real 0m2.483s 00:10:06.242 user 0m2.235s 00:10:06.242 sys 0m0.144s 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.242 13:51:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:06.242 ************************************ 00:10:06.242 END TEST accel_crc32c_C2 00:10:06.242 ************************************ 00:10:06.242 13:51:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:06.242 13:51:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:06.242 13:51:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:06.242 13:51:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.242 13:51:30 accel -- common/autotest_common.sh@10 -- # set +x 00:10:06.242 ************************************ 00:10:06.242 START TEST accel_copy 00:10:06.242 ************************************ 00:10:06.242 13:51:30 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:10:06.242 13:51:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:06.242 13:51:30 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:10:06.242 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.242 13:51:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:06.242 13:51:30 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:10:06.242 13:51:30 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:06.242 13:51:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:06.242 13:51:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:06.242 13:51:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:06.242 13:51:30 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:06.243 13:51:30 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:06.243 13:51:30 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:06.243 13:51:30 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:06.243 13:51:30 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:10:06.243 [2024-07-15 13:51:30.614892] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:06.243 [2024-07-15 13:51:30.615085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65508 ] 00:10:06.501 [2024-07-15 13:51:30.811832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.501 [2024-07-15 13:51:31.037944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:06.758 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:10:08.678 13:51:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:08.678 00:10:08.678 real 0m2.526s 00:10:08.678 user 0m2.286s 00:10:08.678 sys 0m0.137s 00:10:08.678 13:51:33 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.678 13:51:33 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:10:08.678 ************************************ 00:10:08.678 END TEST accel_copy 00:10:08.678 ************************************ 00:10:08.678 13:51:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:08.678 13:51:33 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:08.678 13:51:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:08.678 13:51:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.678 13:51:33 accel -- common/autotest_common.sh@10 -- # set +x 00:10:08.678 ************************************ 00:10:08.678 START TEST accel_fill 00:10:08.678 ************************************ 00:10:08.678 13:51:33 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:08.678 13:51:33 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:10:08.678 13:51:33 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:10:08.678 [2024-07-15 13:51:33.190746] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:08.678 [2024-07-15 13:51:33.190966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65549 ] 00:10:08.936 [2024-07-15 13:51:33.371281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.193 [2024-07-15 13:51:33.557245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:09.451 13:51:33 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.451 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:09.452 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:11.351 13:51:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:11.351 13:51:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:11.351 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
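
The long runs of "IFS=:", "read -r var val", and "case \"$var\" in" entries above are bash xtrace output of the configuration loop in accel.sh, which walks the expected settings for each run (opcode, fill pattern, buffer size, module, queue depth, worker count, duration). A minimal sketch of that shell pattern, with illustrative key names rather than the harness's actual ones:

    # split each input line on ':' into a key and a value,
    # then dispatch on the key (key names here are made up)
    while IFS=: read -r var val; do
      case "$var" in
        opc)    accel_opc=$val ;;     # e.g. fill
        module) accel_module=$val ;;  # e.g. software
        *)      : ;;                  # remaining keys handled the same way
      esac
    done
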
00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:10:11.352 13:51:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:11.352 00:10:11.352 real 0m2.465s 00:10:11.352 user 0m0.014s 00:10:11.352 sys 0m0.005s 00:10:11.352 13:51:35 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.352 13:51:35 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:10:11.352 ************************************ 00:10:11.352 END TEST accel_fill 00:10:11.352 ************************************ 00:10:11.352 13:51:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:11.352 13:51:35 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:11.352 13:51:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:11.352 13:51:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.352 13:51:35 accel -- common/autotest_common.sh@10 -- # set +x 00:10:11.352 ************************************ 00:10:11.352 START TEST accel_copy_crc32c 00:10:11.352 ************************************ 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:10:11.352 13:51:35 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:11.352 [2024-07-15 13:51:35.713770] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:11.352 [2024-07-15 13:51:35.714040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65596 ] 00:10:11.352 [2024-07-15 13:51:35.891947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.609 [2024-07-15 13:51:36.121317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.866 13:51:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
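
The copy_crc32c case drives the accel_perf example binary named in the invocation above. The "-c /dev/fd/62" argument there is a JSON accel configuration the harness feeds over a file descriptor it opens itself; assuming the software module is acceptable as the default, the same workload can plausibly be rerun standalone without it:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y

Here -t is the run time in seconds, -w selects the workload, and -y verifies the results, matching the values traced above (4096-byte buffers, a one-second run).
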
00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.763 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:13.764 00:10:13.764 real 0m2.522s 00:10:13.764 user 0m0.018s 00:10:13.764 sys 0m0.003s 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:13.764 ************************************ 00:10:13.764 END TEST accel_copy_crc32c 00:10:13.764 ************************************ 00:10:13.764 13:51:38 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:13.764 13:51:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:13.764 13:51:38 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:13.764 13:51:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:13.764 13:51:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.764 13:51:38 accel -- common/autotest_common.sh@10 -- # set +x 00:10:13.764 ************************************ 00:10:13.764 START TEST accel_copy_crc32c_C2 00:10:13.764 ************************************ 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:13.764 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:13.764 [2024-07-15 13:51:38.261496] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:13.764 [2024-07-15 13:51:38.261725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65642 ] 00:10:14.021 [2024-07-15 13:51:38.433753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.278 [2024-07-15 13:51:38.620491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.278 13:51:38 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.278 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.279 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.279 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.279 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.279 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.279 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.279 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.279 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.279 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.279 13:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:16.175 ************************************ 00:10:16.175 END TEST accel_copy_crc32c_C2 00:10:16.175 ************************************ 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:16.175 00:10:16.175 real 0m2.450s 00:10:16.175 
user 0m0.012s 00:10:16.175 sys 0m0.003s 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.175 13:51:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:16.175 13:51:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:16.175 13:51:40 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:16.175 13:51:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:16.175 13:51:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.175 13:51:40 accel -- common/autotest_common.sh@10 -- # set +x 00:10:16.175 ************************************ 00:10:16.175 START TEST accel_dualcast 00:10:16.175 ************************************ 00:10:16.175 13:51:40 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:10:16.175 13:51:40 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:10:16.432 [2024-07-15 13:51:40.750944] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
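
accel_copy_crc32c_C2 is the same copy_crc32c workload with "-C 2" appended, presumably the chained-buffer count: the trace reflects it as a 4096-byte source paired with an 8192-byte destination where the plain run used 4096 bytes on both sides. Under the same assumption about the config file descriptor, the standalone form would be:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2
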
00:10:16.432 [2024-07-15 13:51:40.751481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65689 ] 00:10:16.432 [2024-07-15 13:51:40.913649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.689 [2024-07-15 13:51:41.102056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.946 13:51:41 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.946 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:16.947 13:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:18.840 13:51:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:18.841 ************************************ 00:10:18.841 END TEST accel_dualcast 00:10:18.841 ************************************ 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:10:18.841 13:51:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:18.841 00:10:18.841 real 0m2.440s 00:10:18.841 user 0m0.014s 00:10:18.841 sys 0m0.002s 00:10:18.841 13:51:43 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.841 13:51:43 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:10:18.841 13:51:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:18.841 13:51:43 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:18.841 13:51:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:18.841 13:51:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.841 13:51:43 accel -- common/autotest_common.sh@10 -- # set +x 00:10:18.841 ************************************ 00:10:18.841 START TEST accel_compare 00:10:18.841 ************************************ 00:10:18.841 13:51:43 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:10:18.841 13:51:43 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:10:18.841 [2024-07-15 13:51:43.237770] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
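
The dualcast case that just closed above copies one source buffer to two destinations; it takes no extra flags beyond the workload selector, so its standalone form, under the same config-fd assumption, is simply:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y
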
00:10:18.841 [2024-07-15 13:51:43.237961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65734 ] 00:10:19.098 [2024-07-15 13:51:43.418124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.356 [2024-07-15 13:51:43.648649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.356 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:19.357 13:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:19.357 13:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:19.357 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:19.357 13:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:10:21.257 13:51:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:21.257 00:10:21.257 real 0m2.529s 00:10:21.257 user 0m2.273s 00:10:21.257 sys 0m0.153s 00:10:21.257 13:51:45 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.257 13:51:45 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:10:21.257 ************************************ 00:10:21.257 END TEST accel_compare 00:10:21.257 ************************************ 00:10:21.257 13:51:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:21.257 13:51:45 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:21.257 13:51:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:21.257 13:51:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.257 13:51:45 accel -- common/autotest_common.sh@10 -- # set +x 00:10:21.257 ************************************ 00:10:21.257 START TEST accel_xor 00:10:21.257 ************************************ 00:10:21.257 13:51:45 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:21.257 13:51:45 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:21.526 [2024-07-15 13:51:45.804747] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
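
Each case ends with the same three assertions, visible above as the bracketed tests that follow the timing block: some module must have been reported, the expected opcode must have been seen, and the run must have stayed on the software path. In shell form, using the accel_module and accel_opc variables the trace shows being assigned:

    [[ -n $accel_module ]]              # some module was selected
    [[ -n $accel_opc ]]                 # the expected opcode was exercised
    [[ $accel_module == software ]]     # and it ran in software
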
00:10:21.526 [2024-07-15 13:51:45.804889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65782 ] 00:10:21.526 [2024-07-15 13:51:45.974933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.785 [2024-07-15 13:51:46.163845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
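
The xor case registered just above via "run_test accel_xor accel_test -t 1 -w xor -y" XORs two 4096-byte source buffers into one destination; the "val=2" entry in the trace is that source count. Its direct accel_perf form, under the same config-fd assumption:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
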
00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.044 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:22.045 13:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.948 13:51:48 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:23.948 00:10:23.948 real 0m2.453s 00:10:23.948 user 0m2.205s 00:10:23.948 sys 0m0.145s 00:10:23.948 13:51:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.948 13:51:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:23.948 ************************************ 00:10:23.948 END TEST accel_xor 00:10:23.948 ************************************ 00:10:23.948 13:51:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:23.948 13:51:48 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:23.948 13:51:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:23.948 13:51:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.948 13:51:48 accel -- common/autotest_common.sh@10 -- # set +x 00:10:23.948 ************************************ 00:10:23.948 START TEST accel_xor 00:10:23.948 ************************************ 00:10:23.948 13:51:48 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:23.948 13:51:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:23.949 13:51:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:23.949 13:51:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.949 13:51:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.949 13:51:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:23.949 13:51:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:23.949 13:51:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:23.949 [2024-07-15 13:51:48.296152] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
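
The two accel_xor passes here exercise SPDK's software accel module through the accel_perf example app: the run summarized above (real 0m2.453s) XORed two source buffers, and the run just started adds -x 3 to XOR three sources. The val= lines that follow are accel.sh reading the test configuration back out of accel_perf's trace (workload, buffer size, queue depth, run time, verify flag). A minimal sketch of an equivalent standalone invocation, assuming the same checkout path as the log; the logged -c /dev/fd/62 argument feeds accel_perf the JSON config assembled by build_accel_config over a file descriptor, which a plain software-module run can typically omit:

  # Hypothetical manual re-run of the 3-source XOR case, flags as logged:
  #   -t 1   run for 1 second        -w xor  workload
  #   -y     verify the results      -x 3    number of XOR source buffers
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
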
00:10:23.949 [2024-07-15 13:51:48.296347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65823 ] 00:10:23.949 [2024-07-15 13:51:48.468653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.207 [2024-07-15 13:51:48.743701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.465 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:24.465 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.465 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.465 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.465 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:24.465 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.465 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:24.466 13:51:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.420 13:51:50 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:26.420 13:51:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:26.420 00:10:26.420 real 0m2.523s 00:10:26.420 user 0m0.016s 00:10:26.420 sys 0m0.002s 00:10:26.420 13:51:50 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.420 13:51:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:26.420 ************************************ 00:10:26.420 END TEST accel_xor 00:10:26.420 ************************************ 00:10:26.420 13:51:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:26.420 13:51:50 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:26.420 13:51:50 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:26.420 13:51:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.420 13:51:50 accel -- common/autotest_common.sh@10 -- # set +x 00:10:26.420 ************************************ 00:10:26.420 START TEST accel_dif_verify 00:10:26.420 ************************************ 00:10:26.420 13:51:50 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:26.420 13:51:50 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:10:26.420 [2024-07-15 13:51:50.856281] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
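
With the three-source xor variant finished (real 0m2.523s), accel.sh moves on to accel_dif_verify, which checks T10 DIF protection information over pre-generated buffers. In the configuration dump that follows, the '4096 bytes' values are buffer/transfer sizes, while the '512 bytes' and '8 bytes' entries are consistent with a DIF block size and per-block metadata size, though the trace does not label them. A minimal standalone sketch under the same assumptions as above:

  # Hypothetical manual run of the DIF verify workload, flags as logged:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify
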
00:10:26.420 [2024-07-15 13:51:50.856436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65875 ] 00:10:26.693 [2024-07-15 13:51:51.024582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.693 [2024-07-15 13:51:51.225845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.952 13:51:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:28.852 13:51:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:28.852 13:51:53 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:28.852 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:28.852 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:28.852 13:51:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:28.852 13:51:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:10:28.853 13:51:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:28.853 00:10:28.853 real 0m2.442s 00:10:28.853 user 0m2.230s 00:10:28.853 sys 0m0.113s 00:10:28.853 13:51:53 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.853 13:51:53 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:10:28.853 ************************************ 00:10:28.853 END TEST accel_dif_verify 00:10:28.853 ************************************ 00:10:28.853 13:51:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:28.853 13:51:53 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:28.853 13:51:53 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:28.853 13:51:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.853 13:51:53 accel -- common/autotest_common.sh@10 -- # set +x 00:10:28.853 ************************************ 00:10:28.853 START TEST accel_dif_generate 00:10:28.853 ************************************ 00:10:28.853 13:51:53 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.853 13:51:53 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:10:28.853 13:51:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:10:28.853 [2024-07-15 13:51:53.358012] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:28.853 [2024-07-15 13:51:53.358228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65916 ] 00:10:29.118 [2024-07-15 13:51:53.524647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.375 [2024-07-15 13:51:53.710731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:10:29.375 13:51:53 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.375 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.376 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.376 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.376 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.376 13:51:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.376 13:51:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.376 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.376 13:51:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:10:31.272 13:51:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:31.272 00:10:31.272 real 0m2.502s 
00:10:31.272 user 0m2.251s 00:10:31.272 sys 0m0.148s 00:10:31.272 13:51:55 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.272 13:51:55 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:10:31.272 ************************************ 00:10:31.272 END TEST accel_dif_generate 00:10:31.272 ************************************ 00:10:31.529 13:51:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:31.529 13:51:55 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:31.529 13:51:55 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:31.529 13:51:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.529 13:51:55 accel -- common/autotest_common.sh@10 -- # set +x 00:10:31.529 ************************************ 00:10:31.529 START TEST accel_dif_generate_copy 00:10:31.529 ************************************ 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:31.529 13:51:55 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:10:31.529 [2024-07-15 13:51:55.897350] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
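
accel_dif_generate (real 0m2.502s above) computed and inserted DIF protection information in place; the accel_dif_generate_copy run just launched exercises the combined operation that generates DIF while copying data to a separate destination. The two '4096 bytes' entries in the dump that follows presumably cover the source and destination buffers, and the trailing val=No matches the absence of accel_perf's -y verify flag in this invocation. A minimal standalone sketch, same assumptions as the earlier ones:

  # Hypothetical manual run of the DIF generate+copy workload, flags as logged:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy
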
00:10:31.529 [2024-07-15 13:51:55.897539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65963 ] 00:10:31.529 [2024-07-15 13:51:56.068936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.786 [2024-07-15 13:51:56.254192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.042 13:51:56 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.042 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.043 13:51:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:33.968 00:10:33.968 real 0m2.442s 00:10:33.968 user 0m0.014s 00:10:33.968 sys 0m0.004s 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.968 13:51:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:10:33.968 ************************************ 00:10:33.968 END TEST accel_dif_generate_copy 00:10:33.968 ************************************ 00:10:33.968 13:51:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:33.968 13:51:58 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:10:33.968 13:51:58 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:33.968 13:51:58 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:33.968 13:51:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.968 13:51:58 accel -- common/autotest_common.sh@10 -- # set +x 00:10:33.968 ************************************ 00:10:33.968 START TEST accel_comp 00:10:33.968 ************************************ 00:10:33.968 13:51:58 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:10:33.968 13:51:58 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:33.968 13:51:58 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:10:33.968 [2024-07-15 13:51:58.387642] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:33.968 [2024-07-15 13:51:58.387845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66009 ] 00:10:34.226 [2024-07-15 13:51:58.564001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.484 [2024-07-15 13:51:58.773671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:34.484 13:51:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:36.384 13:52:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:36.384 00:10:36.384 real 0m2.476s 00:10:36.384 user 0m2.208s 00:10:36.384 sys 0m0.172s 00:10:36.384 13:52:00 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.384 13:52:00 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:10:36.384 ************************************ 00:10:36.384 END TEST accel_comp 00:10:36.384 ************************************ 00:10:36.384 13:52:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:36.384 13:52:00 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:36.384 13:52:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:36.384 13:52:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.384 13:52:00 accel -- common/autotest_common.sh@10 -- # set +x 00:10:36.384 ************************************ 00:10:36.384 START TEST accel_decomp 00:10:36.384 ************************************ 00:10:36.384 13:52:00 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:36.385 13:52:00 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:36.385 [2024-07-15 13:52:00.892213] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:36.385 [2024-07-15 13:52:00.892388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66050 ] 00:10:36.643 [2024-07-15 13:52:01.057167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.901 [2024-07-15 13:52:01.248047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
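
The `-- # val=...` traces above are accel.sh programming the run one parameter at a time: each incoming line is a name:value pair, `read` splits it on `:`, and a `case` on the name stores the value (hence `accel_opc=decompress` and `accel_module=software` in the trace). A minimal sketch of that protocol, reconstructed from the traces rather than copied from the SPDK source:

    # Hypothetical reduction of the settings loop visible in the xtrace output.
    printf '%s\n' 'opc:decompress' 'module:software' |
    while IFS=: read -r var val; do       # split "name:value" on the colon
        case "$var" in
            opc)    accel_opc=$val ;;     # workload name, e.g. compress / decompress
            module) accel_module=$val ;;  # engine, e.g. software
            *)      ;;                    # other keys ignored in this sketch
        esac
        echo "programmed $var=$val"
    done
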
00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.901 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:37.159 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:37.159 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:37.159 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:37.159 13:52:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:37.159 13:52:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:37.159 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:37.159 13:52:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:39.062 13:52:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:39.062 00:10:39.062 real 0m2.441s 00:10:39.062 user 0m2.207s 00:10:39.062 sys 0m0.135s 00:10:39.062 13:52:03 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.062 13:52:03 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:39.062 ************************************ 00:10:39.062 END TEST accel_decomp 00:10:39.062 ************************************ 00:10:39.062 13:52:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:39.062 13:52:03 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:39.062 13:52:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:39.062 13:52:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.062 13:52:03 accel -- common/autotest_common.sh@10 -- # set +x 00:10:39.062 ************************************ 00:10:39.062 START TEST accel_decomp_full 00:10:39.062 ************************************ 00:10:39.062 13:52:03 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:39.062 13:52:03 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:39.062 [2024-07-15 13:52:03.388007] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
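
The full accel_perf command line above is the whole test in one invocation. An annotated restatement follows; flag meanings are inferred from the traced values and the test names, and `accel_perf --help` on the matching build is the authoritative reference:

    # Flag-by-flag reading of the invocation above (inferred, not authoritative):
    #   -c /dev/fd/62    accel JSON config handed over a file descriptor
    #   -t 1             run for 1 second (the '1 seconds' value in the traces)
    #   -w decompress    workload under test
    #   -l .../bib       compressed input file to operate on
    #   -y               verify the decompressed output
    #   -o 0             0 appears to select the full input size, matching the
    #                    '111250 bytes' runs here vs. '4096 bytes' without it
    build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -o 0
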
00:10:39.062 [2024-07-15 13:52:03.388237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66097 ] 00:10:39.062 [2024-07-15 13:52:03.577894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.321 [2024-07-15 13:52:03.776712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:39.581 13:52:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:41.483 13:52:05 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:41.483 13:52:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:41.483 00:10:41.483 real 0m2.516s 00:10:41.483 user 0m2.259s 00:10:41.483 sys 0m0.158s 00:10:41.483 13:52:05 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.483 13:52:05 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:41.483 ************************************ 00:10:41.483 END TEST accel_decomp_full 00:10:41.483 ************************************ 00:10:41.483 13:52:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:41.483 13:52:05 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:41.483 13:52:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:41.483 13:52:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.483 13:52:05 accel -- common/autotest_common.sh@10 -- # set +x 00:10:41.483 ************************************ 00:10:41.483 START TEST accel_decomp_mcore 00:10:41.483 ************************************ 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:41.483 13:52:05 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:41.483 [2024-07-15 13:52:05.931379] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:41.483 [2024-07-15 13:52:05.931518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66143 ] 00:10:41.751 [2024-07-15 13:52:06.096104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.019 [2024-07-15 13:52:06.332213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.019 [2024-07-15 13:52:06.332254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.019 [2024-07-15 13:52:06.332412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.019 [2024-07-15 13:52:06.332429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
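
The `-m 0xf` mask above is why EAL reports four cores and four reactors come up (on cores 0 through 3, in whatever order they happen to start). The mask is one bit per CPU, so the expected reactor set can be checked with plain shell arithmetic:

    # 0xf == 0b1111 -> CPUs 0,1,2,3, matching the four 'Reactor started on core N' lines
    printf '0x%x\n' "$(( (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) ))"   # prints 0xf
    printf '0x%x\n' "$(( 1 << 2 ))"                                      # 0x4 would pin core 2 only
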
00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 
13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.019 13:52:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:44.551 00:10:44.551 real 0m2.614s 00:10:44.551 user 0m0.018s 00:10:44.551 sys 0m0.001s 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.551 ************************************ 00:10:44.551 END TEST accel_decomp_mcore 00:10:44.551 ************************************ 00:10:44.551 13:52:08 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:44.551 13:52:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:44.551 13:52:08 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:44.551 13:52:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:44.551 13:52:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.551 13:52:08 accel -- common/autotest_common.sh@10 -- # set +x 00:10:44.551 ************************************ 00:10:44.551 START TEST accel_decomp_full_mcore 00:10:44.551 ************************************ 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- 
accel/accel.sh@40 -- # local IFS=, 00:10:44.551 13:52:08 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:44.551 [2024-07-15 13:52:08.588987] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:44.551 [2024-07-15 13:52:08.589130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66193 ] 00:10:44.551 [2024-07-15 13:52:08.762821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.551 [2024-07-15 13:52:08.967936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.551 [2024-07-15 13:52:08.968034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.551 [2024-07-15 13:52:08.968112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.551 [2024-07-15 13:52:08.968114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.809 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.810 13:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:46.708 ************************************ 00:10:46.708 END TEST accel_decomp_full_mcore 00:10:46.708 ************************************ 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:46.708 00:10:46.708 real 0m2.546s 00:10:46.708 user 0m7.366s 00:10:46.708 sys 0m0.186s 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.708 13:52:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:46.708 13:52:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:46.708 13:52:11 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:46.708 13:52:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:46.708 13:52:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.708 13:52:11 accel -- common/autotest_common.sh@10 -- # set +x 00:10:46.708 ************************************ 00:10:46.708 START TEST accel_decomp_mthread 00:10:46.708 ************************************ 00:10:46.708 13:52:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:46.708 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:46.709 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:46.709 [2024-07-15 13:52:11.181425] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
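
Every `run_test NAME command...` call above brackets its command with the START/END banners and the real/user/sys triplet that recur through this log, and the `'[' 11 -le 1 ']'` traces are its guard against being handed one quoted string instead of a name plus a command with arguments. A sketch of that observable shape only; the real helper in autotest_common.sh also manages xtrace state and exit-code bookkeeping:

    # Hypothetical reduction of run_test; banner text matches the log output above.
    run_test() {
        [ "$#" -le 1 ] && echo "run_test: expected a name plus a command" >&2
        local name=$1; shift
        printf '%s\n' '************************************' \
                      "START TEST $name" \
                      '************************************'
        time "$@"        # emits the real/user/sys lines seen after each test
        local rc=$?
        printf '%s\n' '************************************' \
                      "END TEST $name" \
                      '************************************'
        return $rc
    }
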
00:10:46.709 [2024-07-15 13:52:11.181609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66248 ] 00:10:46.966 [2024-07-15 13:52:11.343197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.224 [2024-07-15 13:52:11.531329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
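
This last configured run scales differently from the mcore ones: the EAL mask falls back to `0x1` (hence the single 'Reactor started on core 0' line above), and the extra parallelism is requested with `-T 2`, which arrives through the same var:val channel as `val=2` in the traces that follow. Reading `-T` as a worker-thread count is inferred from the test name (mthread), not from documentation:

    # mcore vs. mthread on the same decompress workload (flag roles inferred):
    build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -m 0xf   # 4 reactors
    build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -T 2     # 1 reactor, 2 threads
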
00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.224 13:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:49.124 00:10:49.124 real 0m2.448s 00:10:49.124 user 0m0.020s 00:10:49.124 sys 0m0.000s 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.124 ************************************ 00:10:49.124 END TEST accel_decomp_mthread 00:10:49.124 ************************************ 00:10:49.124 13:52:13 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:49.124 13:52:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:49.124 13:52:13 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:49.124 13:52:13 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:49.124 13:52:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.124 13:52:13 accel -- common/autotest_common.sh@10 -- # set +x 00:10:49.124 ************************************ 00:10:49.124 START 
TEST accel_decomp_full_mthread 00:10:49.124 ************************************ 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:49.124 13:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:49.382 [2024-07-15 13:52:13.689430] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:10:49.382 [2024-07-15 13:52:13.689703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66289 ] 00:10:49.382 [2024-07-15 13:52:13.874181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.639 [2024-07-15 13:52:14.121317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.896 13:52:14 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.896 13:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:51.793 00:10:51.793 real 0m2.655s 00:10:51.793 user 0m2.379s 00:10:51.793 sys 0m0.172s 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.793 13:52:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:51.793 ************************************ 00:10:51.793 END TEST accel_decomp_full_mthread 00:10:51.793 ************************************ 
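
For readers who want to reproduce the two-thread full-buffer decompress case above outside the harness, the trace already contains the exact invocation. A minimal standalone sketch, assuming the same repository layout as this run; the flag annotations are inferred from the trace values (-t 1 matches the '1 seconds' duration value, -T 2 the thread count of 2), and the empty JSON object fed on fd 62 is an assumption standing in for build_accel_config, which had no module configuration here:

    # Hypothetical standalone re-run of accel_decomp_full_mthread (paths taken
    # from this log; not part of the recorded run).
    #   -t 1           one-second run (the '1 seconds' value in the trace)
    #   -w decompress  opcode under test
    #   -l <file>      compressed input blob used by the decompress workload
    #   -T 2           two worker threads (the val=2 in the trace)
    #   -y, -o 0       forwarded verbatim from accel.sh; their semantics are not
    #                  shown in this log, so they are left uninterpreted here
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0 -T 2 62< <(echo '{}')
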
00:10:51.793 13:52:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:51.793 13:52:16 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:10:51.793 13:52:16 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:51.793 13:52:16 accel -- accel/accel.sh@137 -- # build_accel_config 00:10:51.793 13:52:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:51.793 13:52:16 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:51.793 13:52:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:51.793 13:52:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.793 13:52:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.793 13:52:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.793 13:52:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:51.793 13:52:16 accel -- common/autotest_common.sh@10 -- # set +x 00:10:51.793 13:52:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:51.793 13:52:16 accel -- accel/accel.sh@41 -- # jq -r . 00:10:51.793 ************************************ 00:10:51.793 START TEST accel_dif_functional_tests 00:10:51.793 ************************************ 00:10:51.793 13:52:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:52.052 [2024-07-15 13:52:16.420872] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:52.052 [2024-07-15 13:52:16.421179] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66337 ] 00:10:52.052 [2024-07-15 13:52:16.582320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:52.309 [2024-07-15 13:52:16.774723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.309 [2024-07-15 13:52:16.774791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.309 [2024-07-15 13:52:16.774791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.566 00:10:52.566 00:10:52.566 CUnit - A unit testing framework for C - Version 2.1-3 00:10:52.566 http://cunit.sourceforge.net/ 00:10:52.566 00:10:52.566 00:10:52.566 Suite: accel_dif 00:10:52.566 Test: verify: DIF generated, GUARD check ...passed 00:10:52.566 Test: verify: DIF generated, APPTAG check ...passed 00:10:52.566 Test: verify: DIF generated, REFTAG check ...passed 00:10:52.566 Test: verify: DIF not generated, GUARD check ...passed 00:10:52.566 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 13:52:17.054444] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:52.566 [2024-07-15 13:52:17.054647] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:52.566 passed 00:10:52.566 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 13:52:17.054860] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:52.566 passed 00:10:52.566 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:52.566 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 13:52:17.055004] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:52.566 passed 00:10:52.566 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:10:52.566 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:52.566 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:52.566 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 13:52:17.055785] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:52.566 passed 00:10:52.566 Test: verify copy: DIF generated, GUARD check ...passed 00:10:52.566 Test: verify copy: DIF generated, APPTAG check ...passed 00:10:52.566 Test: verify copy: DIF generated, REFTAG check ...passed 00:10:52.566 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 13:52:17.056500] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:52.566 passed 00:10:52.566 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 13:52:17.056699] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:52.566 passed 00:10:52.566 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 13:52:17.056986] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:52.566 passed 00:10:52.566 Test: generate copy: DIF generated, GUARD check ...passed 00:10:52.566 Test: generate copy: DIF generated, APPTAG check ...passed 00:10:52.566 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:52.566 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:52.566 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:52.566 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:52.566 Test: generate copy: iovecs-len validate ...[2024-07-15 13:52:17.058154] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:10:52.566 passed 00:10:52.566 Test: generate copy: buffer alignment validate ...passed 00:10:52.566 00:10:52.566 Run Summary: Type Total Ran Passed Failed Inactive 00:10:52.566 suites 1 1 n/a 0 0 00:10:52.566 tests 26 26 26 0 0 00:10:52.566 asserts 115 115 115 0 n/a 00:10:52.566 00:10:52.566 Elapsed time = 0.012 seconds 00:10:53.938 00:10:53.938 real 0m1.950s 00:10:53.938 user 0m3.794s 00:10:53.938 sys 0m0.201s 00:10:53.938 13:52:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.938 ************************************ 00:10:53.938 13:52:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:10:53.938 END TEST accel_dif_functional_tests 00:10:53.938 ************************************ 00:10:53.938 13:52:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:53.938 ************************************ 00:10:53.938 END TEST accel 00:10:53.938 ************************************ 00:10:53.938 00:10:53.938 real 0m59.704s 00:10:53.938 user 1m5.722s 00:10:53.938 sys 0m4.662s 00:10:53.938 13:52:18 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.938 13:52:18 accel -- common/autotest_common.sh@10 -- # set +x 00:10:53.938 13:52:18 -- common/autotest_common.sh@1142 -- # return 0 00:10:53.938 13:52:18 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:53.938 13:52:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:53.938 13:52:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.938 13:52:18 -- common/autotest_common.sh@10 -- # set +x 00:10:53.938 ************************************ 00:10:53.938 START TEST accel_rpc 00:10:53.938 ************************************ 00:10:53.938 13:52:18 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:53.938 * Looking for test storage... 00:10:53.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:53.938 13:52:18 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:53.938 13:52:18 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66419 00:10:53.938 13:52:18 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:53.938 13:52:18 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 66419 00:10:53.938 13:52:18 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 66419 ']' 00:10:53.938 13:52:18 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.938 13:52:18 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:53.938 13:52:18 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.938 13:52:18 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:53.938 13:52:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.196 [2024-07-15 13:52:18.559393] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:10:54.196 [2024-07-15 13:52:18.560128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66419 ] 00:10:54.453 [2024-07-15 13:52:18.753125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.710 [2024-07-15 13:52:19.007873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.967 13:52:19 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:54.967 13:52:19 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:54.967 13:52:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:54.967 13:52:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:54.967 13:52:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:54.967 13:52:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:54.967 13:52:19 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:54.967 13:52:19 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:54.967 13:52:19 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.967 13:52:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.967 ************************************ 00:10:54.967 START TEST accel_assign_opcode 00:10:54.967 ************************************ 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:54.967 [2024-07-15 13:52:19.484770] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:54.967 [2024-07-15 13:52:19.492748] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.967 13:52:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:55.899 13:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.899 13:52:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:55.899 13:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.899 13:52:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:55.899 13:52:20 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:10:55.899 13:52:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:10:55.899 13:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.899 software 00:10:55.899 ************************************ 00:10:55.899 END TEST accel_assign_opcode 00:10:55.899 ************************************ 00:10:55.899 00:10:55.899 real 0m0.797s 00:10:55.899 user 0m0.059s 00:10:55.899 sys 0m0.006s 00:10:55.899 13:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.899 13:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:55.899 13:52:20 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:55.899 13:52:20 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 66419 00:10:55.899 13:52:20 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 66419 ']' 00:10:55.899 13:52:20 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 66419 00:10:55.899 13:52:20 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:10:55.899 13:52:20 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:55.899 13:52:20 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66419 00:10:55.899 killing process with pid 66419 00:10:55.899 13:52:20 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:55.899 13:52:20 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:55.899 13:52:20 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66419' 00:10:55.899 13:52:20 accel_rpc -- common/autotest_common.sh@967 -- # kill 66419 00:10:55.899 13:52:20 accel_rpc -- common/autotest_common.sh@972 -- # wait 66419 00:10:58.427 00:10:58.427 real 0m4.192s 00:10:58.427 user 0m4.273s 00:10:58.427 sys 0m0.457s 00:10:58.427 13:52:22 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.427 13:52:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.427 ************************************ 00:10:58.427 END TEST accel_rpc 00:10:58.427 ************************************ 00:10:58.428 13:52:22 -- common/autotest_common.sh@1142 -- # return 0 00:10:58.428 13:52:22 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:58.428 13:52:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:58.428 13:52:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.428 13:52:22 -- common/autotest_common.sh@10 -- # set +x 00:10:58.428 ************************************ 00:10:58.428 START TEST app_cmdline 00:10:58.428 ************************************ 00:10:58.428 13:52:22 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:58.428 * Looking for test storage... 
00:10:58.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:58.428 13:52:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:58.428 13:52:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=66535 00:10:58.428 13:52:22 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:58.428 13:52:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 66535 00:10:58.428 13:52:22 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 66535 ']' 00:10:58.428 13:52:22 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.428 13:52:22 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.428 13:52:22 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.428 13:52:22 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.428 13:52:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:58.428 [2024-07-15 13:52:22.773081] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:58.428 [2024-07-15 13:52:22.773257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66535 ] 00:10:58.428 [2024-07-15 13:52:22.942997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.686 [2024-07-15 13:52:23.129275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.620 13:52:23 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:59.620 13:52:23 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:10:59.620 13:52:23 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:59.620 { 00:10:59.620 "version": "SPDK v24.09-pre git sha1 a95bbf233", 00:10:59.620 "fields": { 00:10:59.620 "major": 24, 00:10:59.620 "minor": 9, 00:10:59.620 "patch": 0, 00:10:59.620 "suffix": "-pre", 00:10:59.620 "commit": "a95bbf233" 00:10:59.620 } 00:10:59.620 } 00:10:59.620 13:52:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:59.620 13:52:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:59.620 13:52:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:59.620 13:52:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:59.620 13:52:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:59.620 13:52:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:59.620 13:52:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:59.620 13:52:24 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.620 13:52:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:59.620 13:52:24 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.620 13:52:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:59.620 13:52:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:59.620 13:52:24 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:59.620 13:52:24 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:10:59.620 13:52:24 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:59.620 13:52:24 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.877 13:52:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:59.877 13:52:24 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.877 13:52:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:59.877 13:52:24 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.877 13:52:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:59.877 13:52:24 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.877 13:52:24 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:59.877 13:52:24 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:00.135 request: 00:11:00.135 { 00:11:00.135 "method": "env_dpdk_get_mem_stats", 00:11:00.135 "req_id": 1 00:11:00.135 } 00:11:00.135 Got JSON-RPC error response 00:11:00.135 response: 00:11:00.135 { 00:11:00.135 "code": -32601, 00:11:00.135 "message": "Method not found" 00:11:00.135 } 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:00.135 13:52:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 66535 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 66535 ']' 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 66535 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66535 00:11:00.135 killing process with pid 66535 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66535' 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@967 -- # kill 66535 00:11:00.135 13:52:24 app_cmdline -- common/autotest_common.sh@972 -- # wait 66535 00:11:02.666 00:11:02.666 real 0m4.011s 00:11:02.666 user 0m4.536s 00:11:02.666 sys 0m0.492s 00:11:02.666 13:52:26 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.666 ************************************ 00:11:02.666 END TEST app_cmdline 00:11:02.666 ************************************ 00:11:02.666 13:52:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:02.666 13:52:26 -- common/autotest_common.sh@1142 -- # return 0 00:11:02.666 13:52:26 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:02.666 13:52:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:02.666 13:52:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.666 13:52:26 -- common/autotest_common.sh@10 -- # set +x 00:11:02.666 ************************************ 00:11:02.666 START TEST version 00:11:02.666 ************************************ 00:11:02.666 13:52:26 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:02.666 * Looking for test storage... 00:11:02.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:02.666 13:52:26 version -- app/version.sh@17 -- # get_header_version major 00:11:02.666 13:52:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:02.666 13:52:26 version -- app/version.sh@14 -- # cut -f2 00:11:02.666 13:52:26 version -- app/version.sh@14 -- # tr -d '"' 00:11:02.666 13:52:26 version -- app/version.sh@17 -- # major=24 00:11:02.666 13:52:26 version -- app/version.sh@18 -- # get_header_version minor 00:11:02.666 13:52:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:02.666 13:52:26 version -- app/version.sh@14 -- # tr -d '"' 00:11:02.666 13:52:26 version -- app/version.sh@14 -- # cut -f2 00:11:02.666 13:52:26 version -- app/version.sh@18 -- # minor=9 00:11:02.666 13:52:26 version -- app/version.sh@19 -- # get_header_version patch 00:11:02.666 13:52:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:02.666 13:52:26 version -- app/version.sh@14 -- # cut -f2 00:11:02.666 13:52:26 version -- app/version.sh@14 -- # tr -d '"' 00:11:02.666 13:52:26 version -- app/version.sh@19 -- # patch=0 00:11:02.666 13:52:26 version -- app/version.sh@20 -- # get_header_version suffix 00:11:02.666 13:52:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:02.666 13:52:26 version -- app/version.sh@14 -- # cut -f2 00:11:02.666 13:52:26 version -- app/version.sh@14 -- # tr -d '"' 00:11:02.666 13:52:26 version -- app/version.sh@20 -- # suffix=-pre 00:11:02.666 13:52:26 version -- app/version.sh@22 -- # version=24.9 00:11:02.666 13:52:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:02.666 13:52:26 version -- app/version.sh@28 -- # version=24.9rc0 00:11:02.666 13:52:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:02.666 13:52:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:02.666 13:52:26 version -- app/version.sh@30 -- # py_version=24.9rc0 00:11:02.666 13:52:26 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:11:02.666 00:11:02.666 real 0m0.133s 00:11:02.666 user 0m0.064s 00:11:02.666 sys 0m0.098s 00:11:02.666 ************************************ 00:11:02.666 END TEST version 00:11:02.666 ************************************ 00:11:02.666 13:52:26 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.666 13:52:26 version -- common/autotest_common.sh@10 -- # set +x 
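
The version test above derives each component with the same three-stage pipeline: grep the matching #define out of include/spdk/version.h, cut the tab-delimited value field, and strip the quotes with tr. A minimal condensation of that extraction, assuming the header layout shown in the trace; get_ver is a hypothetical helper name, not part of version.sh:

    # Hypothetical condensation of version.sh's get_header_version (header path
    # from this log; cut -f2 selects the tab-separated value, tr strips quotes,
    # exactly as the traced pipeline does).
    get_ver() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }
    major=$(get_ver MAJOR)    # 24 in this run
    minor=$(get_ver MINOR)    # 9
    patch=$(get_ver PATCH)    # 0
    suffix=$(get_ver SUFFIX)  # -pre
    version="${major}.${minor}"
    (( patch != 0 )) && version="${version}.${patch}"
    [[ $suffix == -pre ]] && version="${version}rc0"  # 24.9rc0, matching the trace
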
00:11:02.666 13:52:26 -- common/autotest_common.sh@1142 -- # return 0 00:11:02.666 13:52:26 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:11:02.666 13:52:26 -- spdk/autotest.sh@198 -- # uname -s 00:11:02.666 13:52:26 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:11:02.666 13:52:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:11:02.666 13:52:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:11:02.666 13:52:26 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:11:02.666 13:52:26 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:02.666 13:52:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:02.666 13:52:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.667 13:52:26 -- common/autotest_common.sh@10 -- # set +x 00:11:02.667 ************************************ 00:11:02.667 START TEST blockdev_nvme 00:11:02.667 ************************************ 00:11:02.667 13:52:26 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:02.667 * Looking for test storage... 00:11:02.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:02.667 13:52:26 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:11:02.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66702 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:02.667 13:52:26 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66702 00:11:02.667 13:52:26 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 66702 ']' 00:11:02.667 13:52:26 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.667 13:52:26 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.667 13:52:26 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.667 13:52:26 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.667 13:52:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:02.667 [2024-07-15 13:52:26.996615] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:11:02.667 [2024-07-15 13:52:26.996775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66702 ] 00:11:02.667 [2024-07-15 13:52:27.159489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.925 [2024-07-15 13:52:27.347822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.858 13:52:28 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.858 13:52:28 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:11:03.858 13:52:28 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:11:03.858 13:52:28 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:11:03.858 13:52:28 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:11:03.858 13:52:28 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:03.858 13:52:28 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:03.858 13:52:28 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:03.858 13:52:28 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.858 13:52:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.117 13:52:28 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.117 13:52:28 blockdev_nvme -- 
bdev/blockdev.sh@740 -- # cat 00:11:04.117 13:52:28 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.117 13:52:28 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.117 13:52:28 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.117 13:52:28 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:11:04.117 13:52:28 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.117 13:52:28 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:11:04.117 13:52:28 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.117 13:52:28 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:11:04.117 13:52:28 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:11:04.118 13:52:28 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b60bf0de-9c28-4934-aa3e-392402cc7130"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b60bf0de-9c28-4934-aa3e-392402cc7130",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "98ab8e7b-f43b-42a0-8417-69eaf3a80001"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "98ab8e7b-f43b-42a0-8417-69eaf3a80001",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8932a7ca-06d5-4414-94ec-ed6edc630c58"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8932a7ca-06d5-4414-94ec-ed6edc630c58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ae104a28-7949-45bd-905a-d4e687c032d6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ae104a28-7949-45bd-905a-d4e687c032d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": 
false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "58e14a0d-a87f-4f3b-8b63-fbeaa588eb62"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "58e14a0d-a87f-4f3b-8b63-fbeaa588eb62",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7b01585d-cb02-473f-a0b1-5190ea22c937"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7b01585d-cb02-473f-a0b1-5190ea22c937",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' 
"firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:04.118 13:52:28 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:11:04.118 13:52:28 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:11:04.118 13:52:28 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:11:04.118 13:52:28 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 66702 00:11:04.118 13:52:28 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 66702 ']' 00:11:04.118 13:52:28 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 66702 00:11:04.118 13:52:28 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:11:04.118 13:52:28 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:04.118 13:52:28 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66702 00:11:04.118 killing process with pid 66702 00:11:04.118 13:52:28 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:04.118 13:52:28 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:04.118 13:52:28 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66702' 00:11:04.118 13:52:28 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 66702 00:11:04.118 13:52:28 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 66702 00:11:06.646 13:52:30 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:06.646 13:52:30 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:06.646 13:52:30 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:06.646 13:52:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.646 13:52:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:06.646 ************************************ 00:11:06.646 START TEST bdev_hello_world 00:11:06.646 ************************************ 00:11:06.646 13:52:30 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:06.646 [2024-07-15 13:52:30.850176] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:11:06.646 [2024-07-15 13:52:30.850414] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66798 ] 00:11:06.646 [2024-07-15 13:52:31.020718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.905 [2024-07-15 13:52:31.245252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.470 [2024-07-15 13:52:31.857031] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:07.470 [2024-07-15 13:52:31.857098] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:07.470 [2024-07-15 13:52:31.857131] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:07.470 [2024-07-15 13:52:31.860173] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:07.470 [2024-07-15 13:52:31.860765] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:07.470 [2024-07-15 13:52:31.860811] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:07.470 [2024-07-15 13:52:31.861004] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:07.470 00:11:07.470 [2024-07-15 13:52:31.861039] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:08.405 00:11:08.405 real 0m2.121s 00:11:08.405 user 0m1.784s 00:11:08.405 sys 0m0.226s 00:11:08.405 ************************************ 00:11:08.405 END TEST bdev_hello_world 00:11:08.405 ************************************ 00:11:08.405 13:52:32 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:08.405 13:52:32 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:08.405 13:52:32 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:08.405 13:52:32 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:11:08.405 13:52:32 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:08.405 13:52:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.405 13:52:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:08.405 ************************************ 00:11:08.405 START TEST bdev_bounds 00:11:08.405 ************************************ 00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=66840 00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:08.405 Process bdevio pid: 66840 00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66840' 00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 66840 00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 66840 ']' 00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
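The hello_bdev run above needs only two arguments: a JSON config that declares the bdevs and -b naming the one to exercise; everything in its notices (open bdev, open io channel, write, read back "Hello World!", stop) follows from that. Standalone, the same run looks like this from an SPDK build tree:

    # --json attaches the NVMe controllers as bdevs; -b picks the
    # bdev that the example opens, writes a greeting to, and reads
    # back before shutting the app down.
    sudo build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
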
00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.405 13:52:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:08.663 [2024-07-15 13:52:33.000331] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:11:08.663 [2024-07-15 13:52:33.000485] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66840 ] 00:11:08.663 [2024-07-15 13:52:33.165729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:08.926 [2024-07-15 13:52:33.353808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.926 [2024-07-15 13:52:33.353881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.926 [2024-07-15 13:52:33.353886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.519 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.519 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:11:09.520 13:52:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:09.778 I/O targets: 00:11:09.778 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:09.778 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:09.778 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:09.778 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:09.778 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:09.778 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:09.778 00:11:09.778 00:11:09.778 CUnit - A unit testing framework for C - Version 2.1-3 00:11:09.778 http://cunit.sourceforge.net/ 00:11:09.778 00:11:09.778 00:11:09.778 Suite: bdevio tests on: Nvme3n1 00:11:09.778 Test: blockdev write read block ...passed 00:11:09.778 Test: blockdev write zeroes read block ...passed 00:11:09.778 Test: blockdev write zeroes read no split ...passed 00:11:09.778 Test: blockdev write zeroes read split ...passed 00:11:09.778 Test: blockdev write zeroes read split partial ...passed 00:11:09.778 Test: blockdev reset ...[2024-07-15 13:52:34.204093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:11:09.778 [2024-07-15 13:52:34.208015] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
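The bounds test launches bdevio with -w (start, register the bdevs, then park), waits for the app to answer on its RPC socket, and only then fires the suite with tests.py perform_tests. The COMPARE FAILURE (02/85) notices that follow are the status the comparev-and-writev case deliberately provokes, not errors: SCT 0x2 (Media and Data Integrity Errors) with SC 0x85 (Compare Failure). A trimmed-down version of that choreography, with a simple retry loop standing in for the waitforlisten helper:

    # Start bdevio parked on -w, wait for its RPC socket on the
    # default /var/tmp/spdk.sock, then run the suite over RPC.
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -t 1 rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
    test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"
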
00:11:09.778 passed 00:11:09.778 Test: blockdev write read 8 blocks ...passed 00:11:09.778 Test: blockdev write read size > 128k ...passed 00:11:09.778 Test: blockdev write read invalid size ...passed 00:11:09.778 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.778 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.778 Test: blockdev write read max offset ...passed 00:11:09.778 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.778 Test: blockdev writev readv 8 blocks ...passed 00:11:09.778 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.778 Test: blockdev writev readv block ...passed 00:11:09.778 Test: blockdev writev readv size > 128k ...passed 00:11:09.778 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.778 Test: blockdev comparev and writev ...[2024-07-15 13:52:34.217472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26560a000 len:0x1000 00:11:09.778 [2024-07-15 13:52:34.217536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:09.778 passed 00:11:09.778 Test: blockdev nvme passthru rw ...passed 00:11:09.778 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:52:34.218298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:09.778 [2024-07-15 13:52:34.218354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:09.778 passed 00:11:09.778 Test: blockdev nvme admin passthru ...passed 00:11:09.778 Test: blockdev copy ...passed 00:11:09.778 Suite: bdevio tests on: Nvme2n3 00:11:09.778 Test: blockdev write read block ...passed 00:11:09.778 Test: blockdev write zeroes read block ...passed 00:11:09.778 Test: blockdev write zeroes read no split ...passed 00:11:09.778 Test: blockdev write zeroes read split ...passed 00:11:09.779 Test: blockdev write zeroes read split partial ...passed 00:11:09.779 Test: blockdev reset ...[2024-07-15 13:52:34.298172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:11:09.779 [2024-07-15 13:52:34.302439] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
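Each suite's "blockdev reset" case disconnects and reconnects the controller underneath the bdev, as the nvme_ctrlr_disconnect / _bdev_nvme_reset_ctrlr_complete notices above show. Against a running target the same reset can be provoked by hand, using the controller's attach-time name (e.g. Nvme2 for the 0000:00:12.0 device above) — a sketch, roughly equivalent in effect to what the test does internally:

    # Ask bdev_nvme to tear down and re-establish the qpairs for one
    # controller; namespaces reappear once the reset completes.
    scripts/rpc.py bdev_nvme_reset_controller Nvme2
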
00:11:09.779 passed 00:11:09.779 Test: blockdev write read 8 blocks ...passed 00:11:09.779 Test: blockdev write read size > 128k ...passed 00:11:09.779 Test: blockdev write read invalid size ...passed 00:11:09.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.779 Test: blockdev write read max offset ...passed 00:11:09.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.779 Test: blockdev writev readv 8 blocks ...passed 00:11:09.779 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.779 Test: blockdev writev readv block ...passed 00:11:09.779 Test: blockdev writev readv size > 128k ...passed 00:11:09.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.779 Test: blockdev comparev and writev ...[2024-07-15 13:52:34.311712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x274e04000 len:0x1000 00:11:09.779 [2024-07-15 13:52:34.311777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:09.779 passed 00:11:09.779 Test: blockdev nvme passthru rw ...passed 00:11:09.779 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.779 Test: blockdev nvme admin passthru ...[2024-07-15 13:52:34.312630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:09.779 [2024-07-15 13:52:34.312680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:09.779 passed 00:11:09.779 Test: blockdev copy ...passed 00:11:09.779 Suite: bdevio tests on: Nvme2n2 00:11:09.779 Test: blockdev write read block ...passed 00:11:10.055 Test: blockdev write zeroes read block ...passed 00:11:10.055 Test: blockdev write zeroes read no split ...passed 00:11:10.055 Test: blockdev write zeroes read split ...passed 00:11:10.055 Test: blockdev write zeroes read split partial ...passed 00:11:10.055 Test: blockdev reset ...[2024-07-15 13:52:34.383149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:11:10.055 [2024-07-15 13:52:34.387862] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
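The MiB figures in the I/O targets banner above are just blocks × block_size scaled down by 2^20; for the power-of-two disks the arithmetic is exact (Nvme0n1's 1548666 blocks come out at 6049.5 MiB and are quoted rounded as 6050):

    # blocks * block_size / 2^20 = MiB, matching the banner.
    echo $(( 1310720 * 4096 / 1048576 ))  # Nvme1n1   -> 5120
    echo $(( 1048576 * 4096 / 1048576 ))  # Nvme2n1-3 -> 4096
    echo $((  262144 * 4096 / 1048576 ))  # Nvme3n1   -> 1024
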
00:11:10.055 passed 00:11:10.055 Test: blockdev write read 8 blocks ...passed 00:11:10.055 Test: blockdev write read size > 128k ...passed 00:11:10.055 Test: blockdev write read invalid size ...passed 00:11:10.055 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.055 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:10.055 Test: blockdev write read max offset ...passed 00:11:10.055 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.055 Test: blockdev writev readv 8 blocks ...passed 00:11:10.055 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.055 Test: blockdev writev readv block ...passed 00:11:10.055 Test: blockdev writev readv size > 128k ...passed 00:11:10.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.055 Test: blockdev comparev and writev ...[2024-07-15 13:52:34.396114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27103a000 len:0x1000 00:11:10.055 [2024-07-15 13:52:34.396212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:10.055 passed 00:11:10.055 Test: blockdev nvme passthru rw ...passed 00:11:10.055 Test: blockdev nvme passthru vendor specific ...passed 00:11:10.055 Test: blockdev nvme admin passthru ...[2024-07-15 13:52:34.397117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:10.055 [2024-07-15 13:52:34.397194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:10.055 passed 00:11:10.055 Test: blockdev copy ...passed 00:11:10.055 Suite: bdevio tests on: Nvme2n1 00:11:10.055 Test: blockdev write read block ...passed 00:11:10.055 Test: blockdev write zeroes read block ...passed 00:11:10.055 Test: blockdev write zeroes read no split ...passed 00:11:10.055 Test: blockdev write zeroes read split ...passed 00:11:10.055 Test: blockdev write zeroes read split partial ...passed 00:11:10.055 Test: blockdev reset ...[2024-07-15 13:52:34.462290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:11:10.055 [2024-07-15 13:52:34.466422] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:10.055 passed 00:11:10.055 Test: blockdev write read 8 blocks ...passed 00:11:10.055 Test: blockdev write read size > 128k ...passed 00:11:10.055 Test: blockdev write read invalid size ...passed 00:11:10.055 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.055 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:10.055 Test: blockdev write read max offset ...passed 00:11:10.055 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.055 Test: blockdev writev readv 8 blocks ...passed 00:11:10.055 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.055 Test: blockdev writev readv block ...passed 00:11:10.055 Test: blockdev writev readv size > 128k ...passed 00:11:10.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.055 Test: blockdev comparev and writev ...[2024-07-15 13:52:34.474365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271034000 len:0x1000 00:11:10.055 [2024-07-15 13:52:34.474431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:10.055 passed 00:11:10.055 Test: blockdev nvme passthru rw ...passed 00:11:10.055 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:52:34.475254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:10.055 passed 00:11:10.055 Test: blockdev nvme admin passthru ...[2024-07-15 13:52:34.475298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:10.055 passed 00:11:10.055 Test: blockdev copy ...passed 00:11:10.055 Suite: bdevio tests on: Nvme1n1 00:11:10.055 Test: blockdev write read block ...passed 00:11:10.055 Test: blockdev write zeroes read block ...passed 00:11:10.055 Test: blockdev write zeroes read no split ...passed 00:11:10.055 Test: blockdev write zeroes read split ...passed 00:11:10.055 Test: blockdev write zeroes read split partial ...passed 00:11:10.055 Test: blockdev reset ...[2024-07-15 13:52:34.548284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:11:10.055 passed 00:11:10.055 Test: blockdev write read 8 blocks ...[2024-07-15 13:52:34.551948] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:10.055 passed 00:11:10.055 Test: blockdev write read size > 128k ...passed 00:11:10.055 Test: blockdev write read invalid size ...passed 00:11:10.055 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.055 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:10.055 Test: blockdev write read max offset ...passed 00:11:10.055 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.055 Test: blockdev writev readv 8 blocks ...passed 00:11:10.055 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.055 Test: blockdev writev readv block ...passed 00:11:10.055 Test: blockdev writev readv size > 128k ...passed 00:11:10.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.055 Test: blockdev comparev and writev ...[2024-07-15 13:52:34.560497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271030000 len:0x1000 00:11:10.055 [2024-07-15 13:52:34.560563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:10.055 passed 00:11:10.055 Test: blockdev nvme passthru rw ...passed 00:11:10.055 Test: blockdev nvme passthru vendor specific ...passed 00:11:10.055 Test: blockdev nvme admin passthru ...[2024-07-15 13:52:34.561417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:10.055 [2024-07-15 13:52:34.561465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:10.055 passed 00:11:10.055 Test: blockdev copy ...passed 00:11:10.055 Suite: bdevio tests on: Nvme0n1 00:11:10.055 Test: blockdev write read block ...passed 00:11:10.055 Test: blockdev write zeroes read block ...passed 00:11:10.055 Test: blockdev write zeroes read no split ...passed 00:11:10.315 Test: blockdev write zeroes read split ...passed 00:11:10.315 Test: blockdev write zeroes read split partial ...passed 00:11:10.315 Test: blockdev reset ...[2024-07-15 13:52:34.628661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:10.315 [2024-07-15 13:52:34.632434] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:10.315 passed 00:11:10.315 Test: blockdev write read 8 blocks ...passed 00:11:10.315 Test: blockdev write read size > 128k ...passed 00:11:10.315 Test: blockdev write read invalid size ...passed 00:11:10.315 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.315 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:10.315 Test: blockdev write read max offset ...passed 00:11:10.315 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.315 Test: blockdev writev readv 8 blocks ...passed 00:11:10.315 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.315 Test: blockdev writev readv block ...passed 00:11:10.315 Test: blockdev writev readv size > 128k ...passed 00:11:10.315 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.315 Test: blockdev comparev and writev ...passed 00:11:10.315 Test: blockdev nvme passthru rw ...[2024-07-15 13:52:34.640167] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:10.315 separate metadata which is not supported yet. 
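Nvme0n1 is the one bdev here formatted with separate (non-interleaved) metadata, which is why bdevio skips its comparev_and_writev case above. Whether a bdev carries metadata can be checked from bdev_get_bdevs output — a hedged sketch, assuming this SPDK version reports the md_size and md_interleave fields for the namespace:

    # md_size > 0 with md_interleave == false marks separate
    # metadata, the condition behind the skip notice above.
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'
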
00:11:10.315 passed 00:11:10.315 Test: blockdev nvme passthru vendor specific ...passed 00:11:10.315 Test: blockdev nvme admin passthru ...[2024-07-15 13:52:34.640663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:10.315 [2024-07-15 13:52:34.640725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:10.315 passed 00:11:10.315 Test: blockdev copy ...passed 00:11:10.315 00:11:10.315 Run Summary: Type Total Ran Passed Failed Inactive 00:11:10.315 suites 6 6 n/a 0 0 00:11:10.315 tests 138 138 138 0 0 00:11:10.315 asserts 893 893 893 0 n/a 00:11:10.315 00:11:10.315 Elapsed time = 1.408 seconds 00:11:10.315 0 00:11:10.315 13:52:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 66840 00:11:10.315 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 66840 ']' 00:11:10.315 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 66840 00:11:10.315 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:11:10.315 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.315 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66840 00:11:10.315 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:10.315 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:10.315 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66840' 00:11:10.315 killing process with pid 66840 00:11:10.315 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 66840 00:11:10.315 13:52:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 66840 00:11:11.249 13:52:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:11:11.249 00:11:11.249 real 0m2.765s 00:11:11.249 user 0m6.864s 00:11:11.249 sys 0m0.333s 00:11:11.249 13:52:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.249 13:52:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:11.249 ************************************ 00:11:11.249 END TEST bdev_bounds 00:11:11.249 ************************************ 00:11:11.249 13:52:35 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:11.249 13:52:35 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:11.249 13:52:35 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:11.249 13:52:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.249 13:52:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:11.249 ************************************ 00:11:11.249 START TEST bdev_nbd 00:11:11.249 ************************************ 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:11:11.249 
13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=66900 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 66900 /var/tmp/spdk-nbd.sock 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 66900 ']' 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:11.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.249 13:52:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:11.507 [2024-07-15 13:52:35.825622] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
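What follows in the trace is the NBD round-trip for each bdev: export it with nbd_start_disk over the dedicated /var/tmp/spdk-nbd.sock socket (so these rpc.py calls cannot collide with anything on the default socket), wait for the kernel to publish the device in /proc/partitions, prove it serves I/O with a single O_DIRECT read, then stop the export and verify it is gone. Condensed into a sketch for one device, run against the bdev_svc instance started above (the harness loops over six devices, and its waitfornbd/waitfornbd_exit helpers bound the polling at ~20 tries):

    sock=/var/tmp/spdk-nbd.sock
    # The nbd kernel module must expose /dev/nbd* before any export.
    [[ -e /sys/module/nbd ]]
    # Export a bdev; with no device argument the RPC picks a free
    # node and prints its path.
    dev=$(scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1)
    # Wait for the kernel to publish it, then one direct 4 KiB read.
    until grep -q -w "${dev#/dev/}" /proc/partitions; do sleep 0.1; done
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s /tmp/nbdtest) == 4096 ]] && rm -f /tmp/nbdtest
    # Teardown: stop the export, wait for the node to disappear,
    # then assert that nothing is left exported.
    scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"
    while grep -q -w "${dev#/dev/}" /proc/partitions; do sleep 0.1; done
    [[ $(scripts/rpc.py -s "$sock" nbd_get_disks | jq length) -eq 0 ]]
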
00:11:11.507 [2024-07-15 13:52:35.825765] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.507 [2024-07-15 13:52:35.989482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.764 [2024-07-15 13:52:36.220596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.699 13:52:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.699 13:52:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:12.700 13:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.700 1+0 records in 
00:11:12.700 1+0 records out 00:11:12.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000930175 s, 4.4 MB/s 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:12.700 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.266 1+0 records in 00:11:13.266 1+0 records out 00:11:13.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692769 s, 5.9 MB/s 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:13.266 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:13.524 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:13.524 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:13.524 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:11:13.524 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:13.524 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:13.524 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:13.524 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:13.524 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:13.524 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:13.524 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:13.524 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:13.525 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.525 1+0 records in 00:11:13.525 1+0 records out 00:11:13.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562762 s, 7.3 MB/s 00:11:13.525 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.525 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:13.525 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.525 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:13.525 13:52:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:13.525 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:13.525 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:13.525 13:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.783 1+0 records in 00:11:13.783 1+0 records out 00:11:13.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719326 s, 5.7 MB/s 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.783 13:52:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:13.783 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:11:14.041 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:14.041 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:14.041 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:14.041 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:14.042 1+0 records in 00:11:14.042 1+0 records out 00:11:14.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730613 s, 5.6 MB/s 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:14.042 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:14.300 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:14.300 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:14.301 1+0 records in 00:11:14.301 1+0 records out 00:11:14.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611841 s, 6.7 MB/s 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:14.301 13:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:14.886 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd0", 00:11:14.886 "bdev_name": "Nvme0n1" 00:11:14.886 }, 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd1", 00:11:14.886 "bdev_name": "Nvme1n1" 00:11:14.886 }, 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd2", 00:11:14.886 "bdev_name": "Nvme2n1" 00:11:14.886 }, 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd3", 00:11:14.886 "bdev_name": "Nvme2n2" 00:11:14.886 }, 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd4", 00:11:14.886 "bdev_name": "Nvme2n3" 00:11:14.886 }, 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd5", 00:11:14.886 "bdev_name": "Nvme3n1" 00:11:14.886 } 00:11:14.886 ]' 00:11:14.886 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:14.886 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd0", 00:11:14.886 "bdev_name": "Nvme0n1" 00:11:14.886 }, 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd1", 00:11:14.886 "bdev_name": "Nvme1n1" 00:11:14.886 }, 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd2", 00:11:14.886 "bdev_name": "Nvme2n1" 00:11:14.886 }, 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd3", 00:11:14.886 "bdev_name": "Nvme2n2" 00:11:14.886 }, 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd4", 00:11:14.886 "bdev_name": "Nvme2n3" 00:11:14.886 }, 00:11:14.886 { 00:11:14.886 "nbd_device": "/dev/nbd5", 00:11:14.886 "bdev_name": "Nvme3n1" 00:11:14.886 } 00:11:14.886 ]' 00:11:14.886 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:14.886 13:52:39 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:11:14.886 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:14.886 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:11:14.886 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:14.886 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:14.886 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.887 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:15.151 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:15.151 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:15.151 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:15.151 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.151 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.151 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:15.151 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:15.151 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.151 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.151 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:15.409 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:15.409 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:15.409 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:15.409 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.409 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.409 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:15.409 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:15.409 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.409 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.409 13:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:15.666 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:15.666 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:15.666 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:15.666 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.666 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.666 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:15.666 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:15.666 13:52:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:15.666 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.666 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:15.922 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:15.922 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:15.922 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:15.922 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.923 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.923 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:15.923 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:15.923 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.923 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.923 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:16.179 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:16.179 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:16.180 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:16.180 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.180 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.180 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:16.180 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:16.180 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.180 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.180 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:16.437 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:16.437 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:16.437 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:16.437 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.437 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.437 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:16.437 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:16.437 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.437 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:16.437 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.437 13:52:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:16.694 13:52:41 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:16.694 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:16.952 /dev/nbd0 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:16.952 
13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.952 1+0 records in 00:11:16.952 1+0 records out 00:11:16.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516502 s, 7.9 MB/s 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:16.952 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:11:17.210 /dev/nbd1 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.210 1+0 records in 00:11:17.210 1+0 records out 00:11:17.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509674 s, 8.0 MB/s 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:17.210 13:52:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:11:17.468 /dev/nbd10 00:11:17.468 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.725 1+0 records in 00:11:17.725 1+0 records out 00:11:17.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637462 s, 6.4 MB/s 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:17.725 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:11:17.725 /dev/nbd11 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:17.983 13:52:42 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.983 1+0 records in 00:11:17.983 1+0 records out 00:11:17.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707816 s, 5.8 MB/s 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:17.983 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:11:18.243 /dev/nbd12 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.243 1+0 records in 00:11:18.243 1+0 records out 00:11:18.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535852 s, 7.6 MB/s 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:18.243 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:11:18.501 /dev/nbd13 
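Every nbd_start_disk call in this stretch, including the /dev/nbd13 attach above, is followed by the same readiness check: poll /proc/partitions until the kernel has registered the device, then prove it actually services I/O with one 4 KiB O_DIRECT read. A self-contained sketch of that check (the 20-try budget, dd flags, and stat/rm steps mirror the trace; the sleep between polls and the /tmp scratch path are assumptions):

  waitfornbd() {
      local nbd_name=$1 i
      # Give the kernel up to 20 tries to publish the new device node.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed back-off; the trace succeeds on the first pass
      done
      grep -q -w "$nbd_name" /proc/partitions || return 1
      # One direct 4 KiB read proves the backing bdev answers I/O.
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]   # a non-empty read means the device is live
  }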
00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.501 1+0 records in 00:11:18.501 1+0 records out 00:11:18.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738759 s, 5.5 MB/s 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.501 13:52:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd0", 00:11:18.760 "bdev_name": "Nvme0n1" 00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd1", 00:11:18.760 "bdev_name": "Nvme1n1" 00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd10", 00:11:18.760 "bdev_name": "Nvme2n1" 00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd11", 00:11:18.760 "bdev_name": "Nvme2n2" 00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd12", 00:11:18.760 "bdev_name": "Nvme2n3" 00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd13", 00:11:18.760 "bdev_name": "Nvme3n1" 00:11:18.760 } 00:11:18.760 ]' 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd0", 00:11:18.760 "bdev_name": "Nvme0n1" 00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd1", 00:11:18.760 "bdev_name": "Nvme1n1" 00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd10", 00:11:18.760 "bdev_name": "Nvme2n1" 
00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd11", 00:11:18.760 "bdev_name": "Nvme2n2" 00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd12", 00:11:18.760 "bdev_name": "Nvme2n3" 00:11:18.760 }, 00:11:18.760 { 00:11:18.760 "nbd_device": "/dev/nbd13", 00:11:18.760 "bdev_name": "Nvme3n1" 00:11:18.760 } 00:11:18.760 ]' 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:18.760 /dev/nbd1 00:11:18.760 /dev/nbd10 00:11:18.760 /dev/nbd11 00:11:18.760 /dev/nbd12 00:11:18.760 /dev/nbd13' 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:18.760 /dev/nbd1 00:11:18.760 /dev/nbd10 00:11:18.760 /dev/nbd11 00:11:18.760 /dev/nbd12 00:11:18.760 /dev/nbd13' 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:18.760 256+0 records in 00:11:18.760 256+0 records out 00:11:18.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00824078 s, 127 MB/s 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:18.760 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:19.017 256+0 records in 00:11:19.017 256+0 records out 00:11:19.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137219 s, 7.6 MB/s 00:11:19.017 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.017 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:19.017 256+0 records in 00:11:19.017 256+0 records out 00:11:19.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124095 s, 8.4 MB/s 00:11:19.017 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.017 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:19.275 256+0 records in 00:11:19.275 256+0 records out 00:11:19.275 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132315 s, 7.9 MB/s 00:11:19.275 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.275 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:19.275 256+0 records in 00:11:19.275 256+0 records out 00:11:19.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119044 s, 8.8 MB/s 00:11:19.275 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.275 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:19.544 256+0 records in 00:11:19.544 256+0 records out 00:11:19.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130404 s, 8.0 MB/s 00:11:19.544 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.544 13:52:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:19.544 256+0 records in 00:11:19.544 256+0 records out 00:11:19.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130212 s, 8.1 MB/s 00:11:19.544 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:11:19.544 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:19.544 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:19.544 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:19.544 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:19.544 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:19.544 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:19.544 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.544 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.803 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:20.061 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:20.061 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:20.061 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:20.061 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.061 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.061 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:20.061 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:20.061 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.061 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.061 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:20.319 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:20.319 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:20.319 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:20.319 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.319 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.319 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:20.319 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:20.319 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.319 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.319 13:52:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:20.577 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:20.577 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:20.577 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:20.577 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.577 13:52:45 
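The write/verify pass that just completed is the heart of nbd_dd_data_verify: 1 MiB of /dev/urandom data is written to every NBD device with O_DIRECT, then cmp checks each device byte-for-byte against the source file before it is removed. A condensed sketch of that round trip (scratch path shortened; the 4096-byte blocks and 256-block count match the dd invocations in the trace):

  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  tmp_file=/tmp/nbdrandtest

  # Seed 1 MiB of random data, then push it to every exported device.
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # Read each device back and compare the first 1 MiB byte-for-byte.
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev" || echo "mismatch on $dev" >&2
  done
  rm "$tmp_file"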
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.577 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:20.577 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:20.577 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.577 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.577 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:20.835 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:20.835 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:20.835 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:20.835 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.835 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.835 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:20.835 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:20.835 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.835 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.835 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:21.145 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:21.145 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:21.145 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:21.145 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.145 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.145 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:21.145 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.145 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.145 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.145 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:21.418 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:21.418 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:21.418 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:21.418 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.418 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.418 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:21.418 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.418 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.418 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:21.418 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.418 13:52:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:21.674 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:21.674 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:21.674 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:21.931 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:21.931 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:21.931 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:21.931 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:21.931 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:21.931 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:21.931 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:21.931 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:21.932 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:21.932 13:52:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:21.932 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.932 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:21.932 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:21.932 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:21.932 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:22.189 malloc_lvol_verify 00:11:22.189 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:22.447 607d153c-f436-44f2-adbc-dfc504597fb4 00:11:22.447 13:52:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:22.706 f109e07e-dd95-4573-b0ed-15ee0b6b23a7 00:11:22.706 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:22.964 /dev/nbd0 00:11:22.964 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:22.964 mke2fs 1.46.5 (30-Dec-2021) 00:11:22.964 Discarding device blocks: 0/4096 done 00:11:22.964 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:22.964 00:11:22.964 Allocating group tables: 0/1 done 00:11:22.964 Writing inode tables: 0/1 done 00:11:22.964 Creating journal (1024 blocks): done 00:11:22.964 Writing superblocks and filesystem accounting information: 0/1 done 00:11:22.964 00:11:22.964 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:22.964 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:22.964 13:52:47 
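nbd_with_lvol_verify, traced just above, checks that a logical volume exported over NBD behaves like a real disk by formatting it: malloc bdev, lvstore, 4 MiB lvol, NBD export, mkfs.ext4. A condensed sketch of that flow using the same RPC subcommands as the trace:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  # Stack a 4 MiB lvol on top of a 16 MiB, 512-byte-block malloc bdev.
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $rpc bdev_lvol_create lvol 4 -l lvs

  # Export it over NBD; a clean mkfs.ext4 is the pass criterion.
  $rpc nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0; mkfs_ret=$?
  $rpc nbd_stop_disk /dev/nbd0
  [ "$mkfs_ret" -eq 0 ] && echo 'lvol-over-NBD verify passed'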
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.964 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:22.964 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:22.964 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:22.964 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.964 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 66900 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 66900 ']' 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 66900 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66900 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:23.223 killing process with pid 66900 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66900' 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 66900 00:11:23.223 13:52:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 66900 00:11:24.597 13:52:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:11:24.597 00:11:24.597 real 0m13.126s 00:11:24.597 user 0m18.867s 00:11:24.597 sys 0m4.069s 00:11:24.597 ************************************ 00:11:24.597 END TEST bdev_nbd 00:11:24.597 ************************************ 00:11:24.597 13:52:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.597 13:52:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:24.597 13:52:48 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:24.597 skipping fio tests on NVMe due to multi-ns failures. 
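Teardown ends with killprocess 66900, whose steps are all visible in the trace: confirm the pid is set and alive, check the process name (reactor_0, not sudo), then kill and reap it. A simplified reconstruction (the real helper in autotest_common.sh also branches for non-Linux hosts and sudo-owned processes):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1      # the '[' -z 66900 ']' guard in the trace
      kill -0 "$pid" || return 1     # process must still exist
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                    # reap it so the exit status is collected
  }

Note that wait can only reap children of the current shell, which is why the harness launches the SPDK app from the same shell that later tears it down.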
00:11:24.597 13:52:48 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:11:24.597 13:52:48 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:11:24.597 13:52:48 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:11:24.597 13:52:48 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:24.597 13:52:48 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:24.597 13:52:48 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:11:24.597 13:52:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.597 13:52:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:24.597 ************************************ 00:11:24.597 START TEST bdev_verify 00:11:24.597 ************************************ 00:11:24.597 13:52:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:24.597 [2024-07-15 13:52:48.990189] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:11:24.597 [2024-07-15 13:52:48.990364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67313 ] 00:11:24.854 [2024-07-15 13:52:49.155529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:24.854 [2024-07-15 13:52:49.359456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.854 [2024-07-15 13:52:49.359467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.786 Running I/O for 5 seconds... 
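While the five-second verify run is in flight, the invocation behind it is worth spelling out; the command is verbatim from the run_test line above, and the flag comments are interpretation:

  # -q 128: 128 outstanding I/Os per job; -o 4096: 4 KiB I/O size
  # -w verify: write, read back, and compare; -t 5: run for five seconds
  # -C: each core in the mask drives every bdev, which is why the table
  #     below shows paired Core Mask 0x1/0x2 rows per namespace
  # -m 0x3: core mask selecting cores 0 and 1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The later big-I/O pass is the same command with -o 65536, trading 4 KiB I/Os for 64 KiB ones.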
00:11:31.050
00:11:31.050 Latency(us)
00:11:31.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:31.050 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:31.050 Verification LBA range: start 0x0 length 0xbd0bd
00:11:31.050 Nvme0n1 : 5.09 1507.92 5.89 0.00 0.00 84666.90 15847.80 82456.20
00:11:31.050 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:31.050 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:11:31.050 Nvme0n1 : 5.08 1474.39 5.76 0.00 0.00 86347.83 12094.37 91512.09
00:11:31.050 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:31.050 Verification LBA range: start 0x0 length 0xa0000
00:11:31.050 Nvme1n1 : 5.09 1507.45 5.89 0.00 0.00 84481.50 15490.33 71970.44
00:11:31.050 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:31.050 Verification LBA range: start 0xa0000 length 0xa0000
00:11:31.050 Nvme1n1 : 5.08 1473.81 5.76 0.00 0.00 86215.95 12034.79 89605.59
00:11:31.050 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:31.050 Verification LBA range: start 0x0 length 0x80000
00:11:31.050 Nvme2n1 : 5.10 1506.93 5.89 0.00 0.00 84305.57 15371.17 68157.44
00:11:31.050 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:31.050 Verification LBA range: start 0x80000 length 0x80000
00:11:31.050 Nvme2n1 : 5.10 1481.84 5.79 0.00 0.00 85850.31 11021.96 84362.71
00:11:31.050 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:31.050 Verification LBA range: start 0x0 length 0x80000
00:11:31.050 Nvme2n2 : 5.10 1505.75 5.88 0.00 0.00 84134.11 17754.30 70540.57
00:11:31.050 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:31.050 Verification LBA range: start 0x80000 length 0x80000
00:11:31.050 Nvme2n2 : 5.10 1480.63 5.78 0.00 0.00 85728.53 12988.04 80549.70
00:11:31.050 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:31.050 Verification LBA range: start 0x0 length 0x80000
00:11:31.050 Nvme2n3 : 5.10 1504.66 5.88 0.00 0.00 83967.88 16443.58 73876.95
00:11:31.050 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:31.050 Verification LBA range: start 0x80000 length 0x80000
00:11:31.051 Nvme2n3 : 5.10 1479.50 5.78 0.00 0.00 85608.90 14954.12 83886.08
00:11:31.051 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:31.051 Verification LBA range: start 0x0 length 0x20000
00:11:31.051 Nvme3n1 : 5.11 1503.74 5.87 0.00 0.00 83866.86 11617.75 78166.57
00:11:31.051 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:31.051 Verification LBA range: start 0x20000 length 0x20000
00:11:31.051 Nvme3n1 : 5.11 1478.54 5.78 0.00 0.00 85497.29 8877.15 89605.59
00:11:31.051 ===================================================================================================================
00:11:31.051 Total : 17905.16 69.94 0.00 0.00 85047.36 8877.15 91512.09
00:11:32.424
00:11:32.424 real 0m7.684s
00:11:32.424 user 0m14.019s
00:11:32.424 sys 0m0.240s
00:11:32.424 13:52:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:32.424 13:52:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:11:32.424 ************************************
00:11:32.424 END TEST bdev_verify
00:11:32.424 ************************************
00:11:32.424 13:52:56 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:11:32.424 13:52:56 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:32.424 13:52:56 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:11:32.424 13:52:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:32.424 13:52:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:11:32.424 ************************************
00:11:32.424 START TEST bdev_verify_big_io
00:11:32.424 ************************************
00:11:32.424 13:52:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:32.424 [2024-07-15 13:52:56.719032] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:11:32.424 [2024-07-15 13:52:56.719182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67410 ]
00:11:32.424 [2024-07-15 13:52:56.885936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:32.682 [2024-07-15 13:52:57.116921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:32.682 [2024-07-15 13:52:57.116942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:33.615 Running I/O for 5 seconds...
00:11:40.193
00:11:40.194 Latency(us)
00:11:40.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:40.194 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0x0 length 0xbd0b
00:11:40.194 Nvme0n1 : 5.68 112.64 7.04 0.00 0.00 1098986.87 22997.18 1044763.00
00:11:40.194 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0xbd0b length 0xbd0b
00:11:40.194 Nvme0n1 : 5.65 121.78 7.61 0.00 0.00 1012057.35 12988.04 1090519.04
00:11:40.194 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0x0 length 0xa000
00:11:40.194 Nvme1n1 : 5.68 112.58 7.04 0.00 0.00 1067452.23 84839.33 968502.92
00:11:40.194 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0xa000 length 0xa000
00:11:40.194 Nvme1n1 : 5.87 118.07 7.38 0.00 0.00 1008417.21 55288.55 1593835.52
00:11:40.194 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0x0 length 0x8000
00:11:40.194 Nvme2n1 : 5.78 115.67 7.23 0.00 0.00 1009410.66 86745.83 949437.91
00:11:40.194 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0x8000 length 0x8000
00:11:40.194 Nvme2n1 : 5.87 117.70 7.36 0.00 0.00 971370.61 77213.32 1624339.55
00:11:40.194 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0x0 length 0x8000
00:11:40.194 Nvme2n2 : 5.83 120.85 7.55 0.00 0.00 940287.07 46709.29 991380.95
00:11:40.194 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0x8000 length 0x8000
00:11:40.194 Nvme2n2 : 5.93 125.75 7.86 0.00 0.00 891778.68 57909.99 1654843.58
00:11:40.194 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0x0 length 0x8000
00:11:40.194 Nvme2n3 : 5.93 123.42 7.71 0.00 0.00 885813.94 45994.36 1258291.20
00:11:40.194 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0x8000 length 0x8000
00:11:40.194 Nvme2n3 : 5.95 131.64 8.23 0.00 0.00 825029.80 12392.26 1692973.61
00:11:40.194 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0x0 length 0x2000
00:11:40.194 Nvme3n1 : 5.94 139.41 8.71 0.00 0.00 769856.35 1437.32 1075267.03
00:11:40.194 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:40.194 Verification LBA range: start 0x2000 length 0x2000
00:11:40.194 Nvme3n1 : 6.01 158.04 9.88 0.00 0.00 668829.70 1139.43 1723477.64
00:11:40.194 ===================================================================================================================
00:11:40.194 Total : 1497.55 93.60 0.00 0.00 915397.17 1139.43 1723477.64
00:11:41.575
00:11:41.575 real 0m9.126s
00:11:41.575 user 0m16.863s
00:11:41.575 sys 0m0.265s
00:11:41.576 13:53:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:41.576 13:53:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:11:41.576 ************************************
00:11:41.576 END TEST bdev_verify_big_io
00:11:41.576 ************************************
00:11:41.576 13:53:05 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:11:41.576 13:53:05 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:41.576 13:53:05 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:11:41.576 13:53:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:41.576 13:53:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:11:41.576 ************************************
00:11:41.576 START TEST bdev_write_zeroes
00:11:41.576 ************************************
00:11:41.576 13:53:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:41.576 [2024-07-15 13:53:05.903800] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:11:41.576 [2024-07-15 13:53:05.903978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67526 ]
00:11:41.576 [2024-07-15 13:53:06.077487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:41.833 [2024-07-15 13:53:06.304515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:42.400 Running I/O for 1 seconds...
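The table that follows comes from this one-second write_zeroes pass. It is also worth pausing on the scaffolding that frames every sub-test in this log: run_test prints the START banner, times the command (the real/user/sys triples), propagates its exit status, and prints the END banner. A simplified stand-in for that wrapper (the actual helper in autotest_common.sh also manages xtrace state and argument checks like '[' 13 -le 1 ']'):

  run_test() {
      local test_name=$1
      shift
      echo '************************************'
      echo "START TEST $test_name"
      echo '************************************'
      time "$@"               # the timed body: bdevperf, fio, or a shell function
      local rc=$?
      echo '************************************'
      echo "END TEST $test_name"
      echo '************************************'
      return "$rc"
  }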
00:11:43.771
00:11:43.771 Latency(us)
00:11:43.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:43.771 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:43.771 Nvme0n1 : 1.03 6379.33 24.92 0.00 0.00 19967.98 11677.32 36461.85
00:11:43.771 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:43.771 Nvme1n1 : 1.04 6351.70 24.81 0.00 0.00 20000.20 12153.95 36700.16
00:11:43.771 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:43.771 Nvme2n1 : 1.04 6324.65 24.71 0.00 0.00 20005.39 11617.75 36461.85
00:11:43.771 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:43.771 Nvme2n2 : 1.05 6300.01 24.61 0.00 0.00 19982.79 10604.92 36223.53
00:11:43.771 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:43.771 Nvme2n3 : 1.05 6277.34 24.52 0.00 0.00 19999.79 9472.93 35985.22
00:11:43.771 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:43.771 Nvme3n1 : 1.05 6252.98 24.43 0.00 0.00 20019.84 9413.35 36700.16
00:11:43.772 ===================================================================================================================
00:11:43.772 Total : 37886.02 147.99 0.00 0.00 19996.00 9413.35 36700.16
00:11:44.704 ************************************
00:11:44.704 END TEST bdev_write_zeroes
00:11:44.704 ************************************
00:11:44.704
00:11:44.704 real 0m3.390s
00:11:44.704 user 0m3.038s
00:11:44.704 sys 0m0.224s
00:11:44.704 13:53:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:44.704 13:53:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:11:44.704 13:53:09 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:11:44.704 13:53:09 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:44.704 13:53:09 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:11:44.704 13:53:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:44.704 13:53:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:11:44.704 ************************************
00:11:44.704 START TEST bdev_json_nonenclosed
00:11:44.704 ************************************
00:11:44.705 13:53:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:44.963 [2024-07-15 13:53:09.336814] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:11:44.963 [2024-07-15 13:53:09.337003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67585 ] 00:11:45.221 [2024-07-15 13:53:09.507127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.221 [2024-07-15 13:53:09.734858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.221 [2024-07-15 13:53:09.735002] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:45.221 [2024-07-15 13:53:09.735035] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:45.221 [2024-07-15 13:53:09.735056] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:45.787 00:11:45.787 real 0m0.921s 00:11:45.787 user 0m0.696s 00:11:45.787 sys 0m0.118s 00:11:45.787 13:53:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:11:45.787 13:53:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.787 13:53:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:45.787 ************************************ 00:11:45.787 END TEST bdev_json_nonenclosed 00:11:45.787 ************************************ 00:11:45.787 13:53:10 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:11:45.787 13:53:10 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:11:45.787 13:53:10 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:45.787 13:53:10 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:45.787 13:53:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.787 13:53:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:45.787 ************************************ 00:11:45.787 START TEST bdev_json_nonarray 00:11:45.787 ************************************ 00:11:45.787 13:53:10 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:45.787 [2024-07-15 13:53:10.308421] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:11:45.787 [2024-07-15 13:53:10.308632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67616 ] 00:11:46.045 [2024-07-15 13:53:10.480417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.303 [2024-07-15 13:53:10.677215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.303 [2024-07-15 13:53:10.677350] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
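bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is handed a deliberately malformed configuration and must refuse it with a clean error and a non-zero exit rather than crash. The harness records es=234 and treats that expected failure as a pass. A sketch of the pattern (nonarray.json as the input; 234 is the status observed in both traces here):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  bad_json=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json

  # The config must be rejected; acceptance would be the real failure.
  $bdevperf --json "$bad_json" -q 128 -o 4096 -w write_zeroes -t 1 '' && {
      echo 'ERROR: malformed JSON config was accepted' >&2
      exit 1
  }
  es=$?
  [ "$es" -eq 234 ] && echo "rejected as expected (es=$es)"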
00:11:46.303 [2024-07-15 13:53:10.677379] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:46.303 [2024-07-15 13:53:10.677396] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:46.602 00:11:46.602 real 0m0.899s 00:11:46.602 user 0m0.654s 00:11:46.602 sys 0m0.137s 00:11:46.602 13:53:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:11:46.602 13:53:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.602 ************************************ 00:11:46.602 END TEST bdev_json_nonarray 00:11:46.602 13:53:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:46.602 ************************************ 00:11:46.602 13:53:11 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:11:46.602 13:53:11 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:11:46.602 00:11:46.602 real 0m44.321s 00:11:46.602 user 1m7.156s 00:11:46.602 sys 0m6.357s 00:11:46.602 13:53:11 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.602 ************************************ 00:11:46.602 END TEST blockdev_nvme 00:11:46.602 13:53:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:46.602 ************************************ 00:11:46.861 13:53:11 -- common/autotest_common.sh@1142 -- # return 0 00:11:46.861 13:53:11 -- spdk/autotest.sh@213 -- # uname -s 00:11:46.861 13:53:11 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:11:46.861 13:53:11 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:46.861 13:53:11 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:46.861 13:53:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.861 13:53:11 -- common/autotest_common.sh@10 -- # set +x 00:11:46.861 ************************************ 00:11:46.861 START TEST blockdev_nvme_gpt 00:11:46.861 ************************************ 00:11:46.861 13:53:11 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:46.861 * Looking for test storage... 
00:11:46.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67692 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:46.861 13:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 67692 00:11:46.861 13:53:11 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 67692 ']' 00:11:46.861 13:53:11 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.861 13:53:11 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.861 13:53:11 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
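The waitforlisten step traced here polls until the freshly forked spdk_tgt answers on its UNIX-domain RPC socket, with a bounded retry budget (max_retries=100). A condensed sketch of that loop; rpc.py and its -s/-t flags are from this repo, while the loop body is an illustrative reconstruction rather than the exact helper:

    # Poll the target's RPC socket until it responds or the retry budget runs out.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                       # never came up
    }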
00:11:46.861 13:53:11 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.861 13:53:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:46.861 [2024-07-15 13:53:11.360513] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:11:46.861 [2024-07-15 13:53:11.360675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67692 ] 00:11:47.120 [2024-07-15 13:53:11.520087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.378 [2024-07-15 13:53:11.707894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.944 13:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.944 13:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:11:47.944 13:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:11:47.944 13:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:11:47.944 13:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:48.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:48.461 Waiting for block devices as requested 00:11:48.461 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:48.719 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:48.719 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:48.719 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.980 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:53.980 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:53.980 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:11:53.980 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:11:53.980 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:11:53.980 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:11:53.980 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:11:53.981 BYT; 00:11:53.981 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:11:53.981 
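Two probes just ran back to back: get_zoned_devs walks /sys/block/nvme*/queue/zoned so zoned namespaces can be excluded, and the nvme_devs loop then runs parted -ms print against each candidate until one reports an unrecognised disk label, i.e. a blank disk the GPT test may safely partition. A condensed sketch of both, reconstructed from the xtrace:

    # Pick the first non-zoned, label-free NVMe namespace for the GPT test.
    gpt_nvme=
    for sysdir in /sys/block/nvme*; do
        dev=${sysdir##*/}
        # skip zoned namespaces: queue/zoned reads "none" on ordinary disks
        [[ -e $sysdir/queue/zoned && $(<"$sysdir/queue/zoned") != none ]] && continue
        # parted reports "unrecognised disk label" (on stderr) for a blank disk
        if parted "/dev/$dev" -ms print 2>&1 | grep -q 'unrecognised disk label'; then
            gpt_nvme=/dev/$dev
            break
        fi
    done
    echo "GPT target: ${gpt_nvme:-none found}"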
13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:11:53.981 BYT; 00:11:53.981 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:53.981 13:53:18 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:53.981 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 
1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:11:54.915 The operation has completed successfully. 00:11:54.915 13:53:19 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:11:55.848 The operation has completed successfully. 00:11:55.848 13:53:20 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:56.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:56.981 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:56.981 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:56.981 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:56.981 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.239 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:11:57.239 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.239 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:57.239 [] 00:11:57.239 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.239 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:11:57.239 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:11:57.239 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:57.239 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:57.239 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:57.239 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.239 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.497 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.497 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:11:57.497 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.497 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.497 13:53:21 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.497 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.497 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:11:57.497 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:11:57.497 13:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.497 13:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:57.756 13:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.756 13:53:22 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:11:57.756 13:53:22 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:11:57.756 13:53:22 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' 
"partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "6b87240b-136c-4a68-9611-8fe2f050df10"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6b87240b-136c-4a68-9611-8fe2f050df10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4a21aab1-1cb9-4bf5-b8af-39cdc7ab9835"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4a21aab1-1cb9-4bf5-b8af-39cdc7ab9835",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "82da360a-2fce-4b2b-9544-298036c8a2e3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "82da360a-2fce-4b2b-9544-298036c8a2e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b7db2be8-3859-405c-9c6a-0eeed9ff8140"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b7db2be8-3859-405c-9c6a-0eeed9ff8140",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "30b9ebf0-4bcf-4b57-93d2-3697566aedfd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "30b9ebf0-4bcf-4b57-93d2-3697566aedfd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": 
false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:57.756 13:53:22 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:11:57.756 13:53:22 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:11:57.756 13:53:22 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:11:57.756 13:53:22 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 67692 00:11:57.756 13:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 67692 ']' 00:11:57.756 13:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 67692 00:11:57.757 13:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:11:57.757 13:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:57.757 13:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67692 00:11:57.757 13:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:57.757 killing process with pid 67692 00:11:57.757 13:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:57.757 13:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67692' 00:11:57.757 13:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 67692 00:11:57.757 13:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 67692 00:12:00.286 13:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:00.286 13:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:12:00.286 13:53:24 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:00.286 13:53:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.286 13:53:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:00.286 ************************************ 00:12:00.286 START TEST bdev_hello_world 00:12:00.286 ************************************ 00:12:00.286 13:53:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:12:00.286 [2024-07-15 13:53:24.409092] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
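Worth unpacking from a little earlier in this trace: the get_spdk_gpt / get_spdk_gpt_old helpers recover the SPDK partition-type GUIDs straight from module/bdev/gpt/gpt.h. Because the macro's arguments sit between parentheses, IFS='()' splits them out in a single read; two substitutions then turn the C literal list into the plain GUID that sgdisk -t expects. A sketch of those exact steps (the gpt.h macro shape is inferred from the values in the trace):

    GPT_H=module/bdev/gpt/gpt.h
    # grab everything between ( and ) on the matching #define line
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid//, /-}   # 0x6527994e, 0x2c5a, ... -> 0x6527994e-0x2c5a-...
    spdk_guid=${spdk_guid//0x/}    # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
    # sgdisk then stamps the partition type (-t) and unique (-u) GUIDs, as above:
    # sgdisk -t 1:$spdk_guid -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1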
00:12:00.286 [2024-07-15 13:53:24.409282] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68319 ] 00:12:00.286 [2024-07-15 13:53:24.583610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.286 [2024-07-15 13:53:24.774532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.222 [2024-07-15 13:53:25.425027] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:01.222 [2024-07-15 13:53:25.425096] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:12:01.222 [2024-07-15 13:53:25.425131] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:01.222 [2024-07-15 13:53:25.428176] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:01.222 [2024-07-15 13:53:25.428800] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:01.222 [2024-07-15 13:53:25.428843] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:01.222 [2024-07-15 13:53:25.429070] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:01.222 00:12:01.222 [2024-07-15 13:53:25.429117] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:02.157 00:12:02.157 real 0m2.264s 00:12:02.157 user 0m1.932s 00:12:02.157 sys 0m0.217s 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:02.157 ************************************ 00:12:02.157 END TEST bdev_hello_world 00:12:02.157 ************************************ 00:12:02.157 13:53:26 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:12:02.157 13:53:26 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:12:02.157 13:53:26 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:02.157 13:53:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.157 13:53:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:02.157 ************************************ 00:12:02.157 START TEST bdev_bounds 00:12:02.157 ************************************ 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=68367 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:02.157 Process bdevio pid: 68367 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68367' 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 68367 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68367 ']' 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:02.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:02.157 13:53:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:02.415 [2024-07-15 13:53:26.709015] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:12:02.415 [2024-07-15 13:53:26.709168] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68367 ] 00:12:02.415 [2024-07-15 13:53:26.874509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:02.673 [2024-07-15 13:53:27.109256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.673 [2024-07-15 13:53:27.109354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.673 [2024-07-15 13:53:27.109360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.239 13:53:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.239 13:53:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:12:03.239 13:53:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:03.498 I/O targets: 00:12:03.498 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:12:03.498 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:12:03.498 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:03.498 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:03.498 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:03.498 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:03.498 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:03.498 00:12:03.498 00:12:03.498 CUnit - A unit testing framework for C - Version 2.1-3 00:12:03.498 http://cunit.sourceforge.net/ 00:12:03.498 00:12:03.498 00:12:03.498 Suite: bdevio tests on: Nvme3n1 00:12:03.498 Test: blockdev write read block ...passed 00:12:03.498 Test: blockdev write zeroes read block ...passed 00:12:03.498 Test: blockdev write zeroes read no split ...passed 00:12:03.498 Test: blockdev write zeroes read split ...passed 00:12:03.498 Test: blockdev write zeroes read split partial ...passed 00:12:03.498 Test: blockdev reset ...[2024-07-15 13:53:27.959976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:12:03.498 [2024-07-15 13:53:27.963777] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
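The suites now running were launched by the bdev_bounds sequence above: bdevio starts in wait mode so the harness can attach, tests.py then drives the CUnit suites over RPC, and killprocess reaps the app afterwards. A condensed sketch of that flow, using the helpers as traced (paths relative to the SPDK repo):

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    trap 'killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevio_pid"                # autotest_common.sh helper
    test/bdev/bdevio/tests.py perform_tests    # kicks off the CUnit suites via RPC
    killprocess "$bdevio_pid"

Note that the *NOTICE*-level COMPARE FAILURE completions inside each comparev test appear to be driven deliberately by the test itself; the suites still report passed.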
00:12:03.498 passed 00:12:03.498 Test: blockdev write read 8 blocks ...passed 00:12:03.498 Test: blockdev write read size > 128k ...passed 00:12:03.498 Test: blockdev write read invalid size ...passed 00:12:03.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:03.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:03.498 Test: blockdev write read max offset ...passed 00:12:03.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:03.498 Test: blockdev writev readv 8 blocks ...passed 00:12:03.498 Test: blockdev writev readv 30 x 1block ...passed 00:12:03.498 Test: blockdev writev readv block ...passed 00:12:03.498 Test: blockdev writev readv size > 128k ...passed 00:12:03.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:03.498 Test: blockdev comparev and writev ...[2024-07-15 13:53:27.971801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26a604000 len:0x1000 00:12:03.498 [2024-07-15 13:53:27.971867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:03.498 passed 00:12:03.498 Test: blockdev nvme passthru rw ...passed 00:12:03.498 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:53:27.972780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:03.498 passed 00:12:03.498 Test: blockdev nvme admin passthru ...[2024-07-15 13:53:27.972838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:03.498 passed 00:12:03.498 Test: blockdev copy ...passed 00:12:03.498 Suite: bdevio tests on: Nvme2n3 00:12:03.498 Test: blockdev write read block ...passed 00:12:03.498 Test: blockdev write zeroes read block ...passed 00:12:03.498 Test: blockdev write zeroes read no split ...passed 00:12:03.498 Test: blockdev write zeroes read split ...passed 00:12:03.498 Test: blockdev write zeroes read split partial ...passed 00:12:03.498 Test: blockdev reset ...[2024-07-15 13:53:28.038718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:03.757 [2024-07-15 13:53:28.043964] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:03.757 passed 00:12:03.757 Test: blockdev write read 8 blocks ...passed 00:12:03.757 Test: blockdev write read size > 128k ...passed 00:12:03.757 Test: blockdev write read invalid size ...passed 00:12:03.757 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:03.757 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:03.757 Test: blockdev write read max offset ...passed 00:12:03.757 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:03.757 Test: blockdev writev readv 8 blocks ...passed 00:12:03.757 Test: blockdev writev readv 30 x 1block ...passed 00:12:03.757 Test: blockdev writev readv block ...passed 00:12:03.757 Test: blockdev writev readv size > 128k ...passed 00:12:03.757 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:03.757 Test: blockdev comparev and writev ...[2024-07-15 13:53:28.051961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283a3a000 len:0x1000 00:12:03.757 [2024-07-15 13:53:28.052040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:03.757 passed 00:12:03.757 Test: blockdev nvme passthru rw ...passed 00:12:03.757 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:53:28.053035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:03.757 [2024-07-15 13:53:28.053081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:03.757 passed 00:12:03.757 Test: blockdev nvme admin passthru ...passed 00:12:03.757 Test: blockdev copy ...passed 00:12:03.757 Suite: bdevio tests on: Nvme2n2 00:12:03.757 Test: blockdev write read block ...passed 00:12:03.757 Test: blockdev write zeroes read block ...passed 00:12:03.757 Test: blockdev write zeroes read no split ...passed 00:12:03.757 Test: blockdev write zeroes read split ...passed 00:12:03.757 Test: blockdev write zeroes read split partial ...passed 00:12:03.757 Test: blockdev reset ...[2024-07-15 13:53:28.125129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:03.757 [2024-07-15 13:53:28.129759] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:03.757 passed 00:12:03.757 Test: blockdev write read 8 blocks ...passed 00:12:03.757 Test: blockdev write read size > 128k ...passed 00:12:03.757 Test: blockdev write read invalid size ...passed 00:12:03.757 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:03.757 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:03.757 Test: blockdev write read max offset ...passed 00:12:03.757 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:03.757 Test: blockdev writev readv 8 blocks ...passed 00:12:03.757 Test: blockdev writev readv 30 x 1block ...passed 00:12:03.757 Test: blockdev writev readv block ...passed 00:12:03.757 Test: blockdev writev readv size > 128k ...passed 00:12:03.757 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:03.757 Test: blockdev comparev and writev ...[2024-07-15 13:53:28.138648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283a36000 len:0x1000 00:12:03.757 [2024-07-15 13:53:28.138718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:03.757 passed 00:12:03.757 Test: blockdev nvme passthru rw ...passed 00:12:03.757 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:53:28.139669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:03.757 [2024-07-15 13:53:28.139724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:03.757 passed 00:12:03.757 Test: blockdev nvme admin passthru ...passed 00:12:03.757 Test: blockdev copy ...passed 00:12:03.757 Suite: bdevio tests on: Nvme2n1 00:12:03.758 Test: blockdev write read block ...passed 00:12:03.758 Test: blockdev write zeroes read block ...passed 00:12:03.758 Test: blockdev write zeroes read no split ...passed 00:12:03.758 Test: blockdev write zeroes read split ...passed 00:12:03.758 Test: blockdev write zeroes read split partial ...passed 00:12:03.758 Test: blockdev reset ...[2024-07-15 13:53:28.214717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:03.758 [2024-07-15 13:53:28.219192] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:03.758 passed 00:12:03.758 Test: blockdev write read 8 blocks ...passed 00:12:03.758 Test: blockdev write read size > 128k ...passed 00:12:03.758 Test: blockdev write read invalid size ...passed 00:12:03.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:03.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:03.758 Test: blockdev write read max offset ...passed 00:12:03.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:03.758 Test: blockdev writev readv 8 blocks ...passed 00:12:03.758 Test: blockdev writev readv 30 x 1block ...passed 00:12:03.758 Test: blockdev writev readv block ...passed 00:12:03.758 Test: blockdev writev readv size > 128k ...passed 00:12:03.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:03.758 Test: blockdev comparev and writev ...[2024-07-15 13:53:28.228165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283a30000 len:0x1000 00:12:03.758 [2024-07-15 13:53:28.228249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:03.758 passed 00:12:03.758 Test: blockdev nvme passthru rw ...passed 00:12:03.758 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:53:28.229156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:03.758 [2024-07-15 13:53:28.229208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:03.758 passed 00:12:03.758 Test: blockdev nvme admin passthru ...passed 00:12:03.758 Test: blockdev copy ...passed 00:12:03.758 Suite: bdevio tests on: Nvme1n1 00:12:03.758 Test: blockdev write read block ...passed 00:12:03.758 Test: blockdev write zeroes read block ...passed 00:12:03.758 Test: blockdev write zeroes read no split ...passed 00:12:03.758 Test: blockdev write zeroes read split ...passed 00:12:03.758 Test: blockdev write zeroes read split partial ...passed 00:12:03.758 Test: blockdev reset ...[2024-07-15 13:53:28.298536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:12:04.016 [2024-07-15 13:53:28.302641] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:04.016 passed 00:12:04.016 Test: blockdev write read 8 blocks ...passed 00:12:04.016 Test: blockdev write read size > 128k ...passed 00:12:04.016 Test: blockdev write read invalid size ...passed 00:12:04.016 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:04.016 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:04.016 Test: blockdev write read max offset ...passed 00:12:04.016 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:04.017 Test: blockdev writev readv 8 blocks ...passed 00:12:04.017 Test: blockdev writev readv 30 x 1block ...passed 00:12:04.017 Test: blockdev writev readv block ...passed 00:12:04.017 Test: blockdev writev readv size > 128k ...passed 00:12:04.017 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:04.017 Test: blockdev comparev and writev ...[2024-07-15 13:53:28.310156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27220e000 len:0x1000 00:12:04.017 [2024-07-15 13:53:28.310223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:04.017 passed 00:12:04.017 Test: blockdev nvme passthru rw ...passed 00:12:04.017 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:53:28.311080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:04.017 [2024-07-15 13:53:28.311136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:04.017 passed 00:12:04.017 Test: blockdev nvme admin passthru ...passed 00:12:04.017 Test: blockdev copy ...passed 00:12:04.017 Suite: bdevio tests on: Nvme0n1p2 00:12:04.017 Test: blockdev write read block ...passed 00:12:04.017 Test: blockdev write zeroes read block ...passed 00:12:04.017 Test: blockdev write zeroes read no split ...passed 00:12:04.017 Test: blockdev write zeroes read split ...passed 00:12:04.017 Test: blockdev write zeroes read split partial ...passed 00:12:04.017 Test: blockdev reset ...[2024-07-15 13:53:28.415015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:12:04.017 [2024-07-15 13:53:28.419186] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:04.017 passed 00:12:04.017 Test: blockdev write read 8 blocks ...passed 00:12:04.017 Test: blockdev write read size > 128k ...passed 00:12:04.017 Test: blockdev write read invalid size ...passed 00:12:04.017 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:04.017 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:04.017 Test: blockdev write read max offset ...passed 00:12:04.017 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:04.017 Test: blockdev writev readv 8 blocks ...passed 00:12:04.017 Test: blockdev writev readv 30 x 1block ...passed 00:12:04.017 Test: blockdev writev readv block ...passed 00:12:04.017 Test: blockdev writev readv size > 128k ...passed 00:12:04.017 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:04.017 Test: blockdev comparev and writev ...[2024-07-15 13:53:28.428132] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:12:04.017 separate metadata which is not supported yet. 
00:12:04.017 passed 00:12:04.017 Test: blockdev nvme passthru rw ...passed 00:12:04.017 Test: blockdev nvme passthru vendor specific ...passed 00:12:04.017 Test: blockdev nvme admin passthru ...passed 00:12:04.017 Test: blockdev copy ...passed 00:12:04.017 Suite: bdevio tests on: Nvme0n1p1 00:12:04.017 Test: blockdev write read block ...passed 00:12:04.017 Test: blockdev write zeroes read block ...passed 00:12:04.017 Test: blockdev write zeroes read no split ...passed 00:12:04.017 Test: blockdev write zeroes read split ...passed 00:12:04.017 Test: blockdev write zeroes read split partial ...passed 00:12:04.017 Test: blockdev reset ...[2024-07-15 13:53:28.502684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:12:04.017 [2024-07-15 13:53:28.506395] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:04.017 passed 00:12:04.017 Test: blockdev write read 8 blocks ...passed 00:12:04.017 Test: blockdev write read size > 128k ...passed 00:12:04.017 Test: blockdev write read invalid size ...passed 00:12:04.017 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:04.017 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:04.017 Test: blockdev write read max offset ...passed 00:12:04.017 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:04.017 Test: blockdev writev readv 8 blocks ...passed 00:12:04.017 Test: blockdev writev readv 30 x 1block ...passed 00:12:04.017 Test: blockdev writev readv block ...passed 00:12:04.017 Test: blockdev writev readv size > 128k ...passed 00:12:04.017 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:04.017 Test: blockdev comparev and writev ...passed 00:12:04.017 Test: blockdev nvme passthru rw ...passed 00:12:04.017 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:53:28.513484] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:12:04.017 separate metadata which is not supported yet. 
00:12:04.017 passed 00:12:04.017 Test: blockdev nvme admin passthru ...passed 00:12:04.017 Test: blockdev copy ...passed 00:12:04.017 00:12:04.017 Run Summary: Type Total Ran Passed Failed Inactive 00:12:04.017 suites 7 7 n/a 0 0 00:12:04.017 tests 161 161 161 0 0 00:12:04.017 asserts 1006 1006 1006 0 n/a 00:12:04.017 00:12:04.017 Elapsed time = 1.696 seconds 00:12:04.017 0 00:12:04.017 13:53:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 68367 00:12:04.017 13:53:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68367 ']' 00:12:04.017 13:53:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68367 00:12:04.017 13:53:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:12:04.017 13:53:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.017 13:53:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68367 00:12:04.276 13:53:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:04.276 13:53:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:04.276 killing process with pid 68367 00:12:04.276 13:53:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68367' 00:12:04.276 13:53:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68367 00:12:04.276 13:53:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68367 00:12:05.216 13:53:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:12:05.216 00:12:05.216 real 0m2.928s 00:12:05.216 user 0m7.223s 00:12:05.216 sys 0m0.379s 00:12:05.216 13:53:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:05.217 ************************************ 00:12:05.217 END TEST bdev_bounds 00:12:05.217 ************************************ 00:12:05.217 13:53:29 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:12:05.217 13:53:29 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:05.217 13:53:29 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:05.217 13:53:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.217 13:53:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:05.217 ************************************ 00:12:05.217 START TEST bdev_nbd 00:12:05.217 ************************************ 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:05.217 13:53:29 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=68432 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 68432 /var/tmp/spdk-nbd.sock 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 68432 ']' 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.217 13:53:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:05.217 [2024-07-15 13:53:29.736489] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
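The nbd checks that follow hinge on waitfornbd: a freshly attached /dev/nbdX is only trusted once it appears in /proc/partitions and a single 4 KiB direct-I/O read actually moves data, as the dd and stat calls traced below show. A condensed sketch of that helper (retry bounds illustrative):

    # Wait for an nbd device to become usable, then prove it with one direct read.
    waitfornbd_sketch() {
        local nbd=$1 tmp=/tmp/nbdtest i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        dd if="/dev/$nbd" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [[ $size != 0 ]]           # zero bytes copied means the device isn't serving I/O
    }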
00:12:05.217 [2024-07-15 13:53:29.736669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.475 [2024-07-15 13:53:29.909424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.817 [2024-07-15 13:53:30.163421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:06.398 13:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.656 1+0 records in 00:12:06.656 1+0 records out 00:12:06.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507638 s, 8.1 MB/s 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:06.656 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.914 1+0 records in 00:12:06.914 1+0 records out 00:12:06.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684714 s, 6.0 MB/s 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:06.914 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1 00:12:07.171 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:07.171 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.427 1+0 records in 00:12:07.427 1+0 records out 00:12:07.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549989 s, 7.4 MB/s 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:07.427 13:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.684 1+0 records in 00:12:07.684 1+0 records out 00:12:07.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580152 s, 7.1 MB/s 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:07.684 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.940 1+0 records in 00:12:07.940 1+0 records out 00:12:07.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541796 s, 7.6 MB/s 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:07.940 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.197 1+0 records in 00:12:08.197 1+0 records out 00:12:08.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631373 s, 6.5 MB/s 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:08.197 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.457 1+0 records in 00:12:08.457 1+0 records out 00:12:08.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071031 s, 5.8 MB/s 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:08.457 13:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:08.714 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd0", 00:12:08.714 "bdev_name": "Nvme0n1p1" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd1", 00:12:08.714 "bdev_name": "Nvme0n1p2" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd2", 00:12:08.714 "bdev_name": "Nvme1n1" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd3", 00:12:08.714 "bdev_name": "Nvme2n1" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd4", 00:12:08.714 "bdev_name": "Nvme2n2" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd5", 00:12:08.714 "bdev_name": "Nvme2n3" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd6", 00:12:08.714 "bdev_name": "Nvme3n1" 00:12:08.714 } 00:12:08.714 ]' 00:12:08.714 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:08.714 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:08.714 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd0", 00:12:08.714 "bdev_name": "Nvme0n1p1" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd1", 00:12:08.714 "bdev_name": "Nvme0n1p2" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd2", 00:12:08.714 "bdev_name": "Nvme1n1" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd3", 00:12:08.714 "bdev_name": "Nvme2n1" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd4", 00:12:08.714 "bdev_name": "Nvme2n2" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd5", 00:12:08.714 "bdev_name": "Nvme2n3" 00:12:08.714 }, 00:12:08.714 { 00:12:08.714 "nbd_device": "/dev/nbd6", 00:12:08.714 "bdev_name": "Nvme3n1" 00:12:08.714 } 00:12:08.714 ]' 00:12:08.972 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:12:08.972 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:08.972 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:12:08.972 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:08.972 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:08.972 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.972 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:09.228 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:09.228 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:09.228 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:09.228 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.228 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.228 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:09.228 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:09.228 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.228 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.228 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:09.485 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:09.485 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:09.485 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:09.485 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.485 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.485 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:09.485 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:09.485 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.485 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.485 13:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:09.742 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:09.742 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:09.742 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:09.742 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.742 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.742 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:09.742 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:09.742 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.742 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.742 13:53:34 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:09.999 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:09.999 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:09.999 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:09.999 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.999 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.999 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:09.999 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:09.999 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.999 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.999 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:10.564 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:10.564 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:10.564 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:10.564 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.564 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.564 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:10.564 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:10.564 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.564 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.564 13:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:10.564 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:10.564 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:10.564 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:10.564 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.564 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.564 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:10.564 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:10.564 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.564 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.564 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:11.127 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:11.127 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:11.127 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:12:11.127 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.127 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.127 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:11.127 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:11.127 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.127 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:11.127 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.127 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:11.385 
13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:11.385 13:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:12:11.659 /dev/nbd0 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.659 1+0 records in 00:12:11.659 1+0 records out 00:12:11.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536247 s, 7.6 MB/s 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:11.659 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:12:11.986 /dev/nbd1 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:11.986 13:53:36 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.986 1+0 records in 00:12:11.986 1+0 records out 00:12:11.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588757 s, 7.0 MB/s 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:11.986 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:12:12.244 /dev/nbd10 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.244 1+0 records in 00:12:12.244 1+0 records out 00:12:12.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473971 s, 8.6 MB/s 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:12.244 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:12:12.503 /dev/nbd11 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.503 1+0 records in 00:12:12.503 1+0 records out 00:12:12.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560988 s, 7.3 MB/s 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:12.503 13:53:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:12:12.761 /dev/nbd12 00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
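Once all seven bdevs are re-exported at fixed paths (/dev/nbd0, /dev/nbd1, /dev/nbd10 through /dev/nbd14), the trace below runs a data-integrity pass: 1 MiB of random data is pushed through every device with direct I/O and then compared back byte-for-byte. A condensed sketch of that pass, under the same assumptions as the sketch above:

    RAND=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$RAND" bs=4096 count=256              # 1 MiB test pattern

    devices='/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
    for dev in $devices; do
        dd if="$RAND" of="$dev" bs=4096 count=256 oflag=direct   # write pass
    done
    for dev in $devices; do
        cmp -b -n 1M "$RAND" "$dev"                              # byte-for-byte readback check
    done
    rm "$RAND"

oflag=direct keeps the page cache out of the write path, so the cmp readback exercises the NBD-to-bdev datapath rather than cached pages.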
00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.761 1+0 records in 00:12:12.761 1+0 records out 00:12:12.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488587 s, 8.4 MB/s 00:12:12.761 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.020 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:13.020 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.020 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:13.020 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:13.020 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.020 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:13.020 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:12:13.278 /dev/nbd13 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.278 1+0 records in 00:12:13.278 1+0 records out 00:12:13.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00076266 s, 5.4 MB/s 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:13.278 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:12:13.536 /dev/nbd14 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.536 1+0 records in 00:12:13.536 1+0 records out 00:12:13.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548198 s, 7.5 MB/s 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.536 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:13.537 13:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:13.537 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.537 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:13.537 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:13.537 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:13.537 13:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd0", 00:12:13.795 "bdev_name": "Nvme0n1p1" 00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd1", 00:12:13.795 "bdev_name": "Nvme0n1p2" 00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd10", 00:12:13.795 "bdev_name": "Nvme1n1" 00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd11", 00:12:13.795 "bdev_name": "Nvme2n1" 00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd12", 00:12:13.795 "bdev_name": "Nvme2n2" 00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd13", 00:12:13.795 "bdev_name": "Nvme2n3" 
00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd14", 00:12:13.795 "bdev_name": "Nvme3n1" 00:12:13.795 } 00:12:13.795 ]' 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd0", 00:12:13.795 "bdev_name": "Nvme0n1p1" 00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd1", 00:12:13.795 "bdev_name": "Nvme0n1p2" 00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd10", 00:12:13.795 "bdev_name": "Nvme1n1" 00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd11", 00:12:13.795 "bdev_name": "Nvme2n1" 00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd12", 00:12:13.795 "bdev_name": "Nvme2n2" 00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd13", 00:12:13.795 "bdev_name": "Nvme2n3" 00:12:13.795 }, 00:12:13.795 { 00:12:13.795 "nbd_device": "/dev/nbd14", 00:12:13.795 "bdev_name": "Nvme3n1" 00:12:13.795 } 00:12:13.795 ]' 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:13.795 /dev/nbd1 00:12:13.795 /dev/nbd10 00:12:13.795 /dev/nbd11 00:12:13.795 /dev/nbd12 00:12:13.795 /dev/nbd13 00:12:13.795 /dev/nbd14' 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:13.795 /dev/nbd1 00:12:13.795 /dev/nbd10 00:12:13.795 /dev/nbd11 00:12:13.795 /dev/nbd12 00:12:13.795 /dev/nbd13 00:12:13.795 /dev/nbd14' 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:13.795 256+0 records in 00:12:13.795 256+0 records out 00:12:13.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469692 s, 223 MB/s 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:13.795 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:14.054 256+0 records in 00:12:14.054 256+0 records out 00:12:14.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.141991 s, 7.4 MB/s
00:12:14.054 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:14.054 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:12:14.054 256+0 records in
00:12:14.054 256+0 records out
00:12:14.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139105 s, 7.5 MB/s
00:12:14.054 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:14.054 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:12:14.312 256+0 records in
00:12:14.312 256+0 records out
00:12:14.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174797 s, 6.0 MB/s
00:12:14.312 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:14.312 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:12:14.571 256+0 records in
00:12:14.571 256+0 records out
00:12:14.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162268 s, 6.5 MB/s
00:12:14.571 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:14.571 13:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:12:14.571 256+0 records in
00:12:14.571 256+0 records out
00:12:14.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166319 s, 6.3 MB/s
00:12:14.571 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:14.571 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:12:14.829 256+0 records in
00:12:14.829 256+0 records out
00:12:14.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164417 s, 6.4 MB/s
00:12:14.829 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:14.829 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:12:15.088 256+0 records in
00:12:15.088 256+0 records out
00:12:15.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169815 s, 6.2 MB/s
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:15.088 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:15.347 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:15.347 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:15.347 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:15.347 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:15.347 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:15.347 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:15.347 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:15.347 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:15.347 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:15.347 13:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:15.604 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:15.604 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:15.604 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:15.604 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:15.604 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:15.604 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:15.604 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:15.604 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:15.604 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:15.604 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:12:15.860 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:12:16.117 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:12:16.117 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:12:16.117 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:16.117 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:16.117 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:12:16.117 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:16.117 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:16.117 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:16.117 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:12:16.374 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:12:16.374 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:12:16.374 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:12:16.374 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:16.374 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:16.374 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:12:16.374 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:16.374 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:16.374 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:16.374 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:12:16.655 13:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:12:16.655 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:12:16.656 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:12:16.656 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:16.656 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:16.656 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:12:16.656 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:16.656 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:16.656 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:16.656 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:12:16.924 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:12:16.924 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:12:16.924 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:12:16.924 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:16.924 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:16.924 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:12:16.924 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:16.924 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:16.924 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:16.924 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:12:17.181 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:12:17.181 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:12:17.181 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:12:17.181 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:17.181 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:17.181 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:12:17.181 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:17.181 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:17.181 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:17.181 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:17.181 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:12:17.440 13:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:12:18.005 malloc_lvol_verify
00:12:18.005 13:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:12:18.263 7768cde1-7059-48fe-893e-6ce224e77cdd
00:12:18.263 13:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:12:18.520 a21fe9f2-3112-4b76-b2d6-3727926bca45
00:12:18.520 13:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:12:18.779 /dev/nbd0
00:12:18.779 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:12:18.779 mke2fs 1.46.5 (30-Dec-2021)
00:12:18.779 Discarding device blocks: 0/4096 done
00:12:18.779 Creating filesystem with 4096 1k blocks and 1024 inodes
00:12:18.779
00:12:18.779 Allocating group tables: 0/1 done
00:12:18.779 Writing inode tables: 0/1 done
00:12:18.779 Creating journal (1024 blocks): done
00:12:18.779 Writing superblocks and filesystem accounting information: 0/1 done
00:12:18.779
00:12:18.779 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:12:18.779 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:12:18.779 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:18.779 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:12:18.779 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:18.779 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:12:18.779 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:18.779 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 68432
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 68432 ']'
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 68432
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68432
00:12:19.036 killing process with pid 68432
13:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68432'
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 68432
00:12:19.036 13:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 68432
00:12:20.409 ************************************
00:12:20.409 END TEST bdev_nbd
00:12:20.409 ************************************
00:12:20.409 13:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT
00:12:20.409
00:12:20.409 real 0m15.013s
00:12:20.409 user 0m21.232s
00:12:20.409 sys 0m4.790s
00:12:20.409 13:53:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:20.410 13:53:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:12:20.410 13:53:44 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0
00:12:20.410 13:53:44 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]]
00:12:20.410 13:53:44 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']'
00:12:20.410 13:53:44 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']'
00:12:20.410 skipping fio tests on NVMe due to multi-ns failures.
13:53:44 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:12:20.410 13:53:44 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT
00:12:20.410 13:53:44 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:20.410 13:53:44 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:12:20.410 13:53:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:20.410 13:53:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:12:20.410 ************************************
00:12:20.410 START TEST bdev_verify
00:12:20.410 ************************************
00:12:20.410 13:53:44 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:20.410 [2024-07-15 13:53:44.750108] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:12:20.410 [2024-07-15 13:53:44.750505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68882 ]
00:12:20.410 [2024-07-15 13:53:44.927326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:20.667 [2024-07-15 13:53:45.159701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:20.667 [2024-07-15 13:53:45.159715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:21.600 Running I/O for 5 seconds...
00:12:26.865
00:12:26.865 Latency(us)
00:12:26.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:26.865 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x0 length 0x5e800
00:12:26.865 Nvme0n1p1 : 5.11 1266.15 4.95 0.00 0.00 100431.12 17039.36 96278.34
00:12:26.865 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x5e800 length 0x5e800
00:12:26.865 Nvme0n1p1 : 5.11 1226.25 4.79 0.00 0.00 104150.74 19899.11 104380.97
00:12:26.865 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x0 length 0x5e7ff
00:12:26.865 Nvme0n1p2 : 5.11 1265.55 4.94 0.00 0.00 100197.49 15371.17 92465.34
00:12:26.865 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:12:26.865 Nvme0n1p2 : 5.12 1225.72 4.79 0.00 0.00 104010.88 20256.58 98661.47
00:12:26.865 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x0 length 0xa0000
00:12:26.865 Nvme1n1 : 5.11 1265.00 4.94 0.00 0.00 99982.31 15252.01 88652.33
00:12:26.865 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0xa0000 length 0xa0000
00:12:26.865 Nvme1n1 : 5.12 1225.22 4.79 0.00 0.00 103868.55 20494.89 92941.96
00:12:26.865 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x0 length 0x80000
00:12:26.865 Nvme2n1 : 5.12 1274.07 4.98 0.00 0.00 99454.56 9592.09 86269.21
00:12:26.865 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x80000 length 0x80000
00:12:26.865 Nvme2n1 : 5.12 1224.70 4.78 0.00 0.00 103735.22 20375.74 91035.46
00:12:26.865 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x0 length 0x80000
00:12:26.865 Nvme2n2 : 5.13 1273.05 4.97 0.00 0.00 99275.83 12332.68 89605.59
00:12:26.865 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x80000 length 0x80000
00:12:26.865 Nvme2n2 : 5.12 1224.17 4.78 0.00 0.00 103576.39 20018.27 94848.47
00:12:26.865 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x0 length 0x80000
00:12:26.865 Nvme2n3 : 5.13 1272.12 4.97 0.00 0.00 99134.64 14596.65 92465.34
00:12:26.865 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x80000 length 0x80000
00:12:26.865 Nvme2n3 : 5.13 1223.41 4.78 0.00 0.00 103430.38 20018.27 98184.84
00:12:26.865 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x0 length 0x20000
00:12:26.865 Nvme3n1 : 5.13 1271.61 4.97 0.00 0.00 99047.73 15252.01 94848.47
00:12:26.865 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:26.865 Verification LBA range: start 0x20000 length 0x20000
00:12:26.865 Nvme3n1 : 5.13 1222.95 4.78 0.00 0.00 103272.02 11736.90 103427.72
00:12:26.865 ===================================================================================================================
00:12:26.865 Total : 17460.00 68.20 0.00 0.00 101645.77 9592.09 104380.97
00:12:28.242
00:12:28.242 real 0m7.833s
00:12:28.242 user 0m14.234s
00:12:28.242 sys 0m0.274s
00:12:28.242 13:53:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:28.242 13:53:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:12:28.242 ************************************
00:12:28.242 END TEST bdev_verify
00:12:28.242 ************************************
00:12:28.242 13:53:52 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0
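Reassembled from the run_test trace above, the verify pass is a single bdevperf invocation: 4 KiB I/O at queue depth 128 against every bdev described in bdev.json, for 5 seconds on two cores. A sketch of the command as this run issued it:

    # -q: queue depth, -o: I/O size in bytes, -w: workload, -t: seconds,
    # -C: allow verify with multiple cores, -m: core mask (cores 0 and 1)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The bdev_verify_big_io test that follows is the same invocation with -o 65536.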
00:12:28.242 13:53:52 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:28.242 13:53:52 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:12:28.242 13:53:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:28.242 13:53:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:12:28.242 ************************************
00:12:28.242 START TEST bdev_verify_big_io
00:12:28.242 ************************************
00:12:28.242 13:53:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:28.242 [2024-07-15 13:53:52.644691] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:12:28.242 [2024-07-15 13:53:52.644846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68986 ]
00:12:28.501 [2024-07-15 13:53:52.810008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:28.501 [2024-07-15 13:53:53.042542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:28.501 [2024-07-15 13:53:53.042554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:29.465 Running I/O for 5 seconds...
00:12:36.025
00:12:36.025 Latency(us)
00:12:36.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:36.025 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:36.025 Verification LBA range: start 0x0 length 0x5e80
00:12:36.025 Nvme0n1p1 : 5.75 105.81 6.61 0.00 0.00 1172428.85 34793.66 1151527.10
00:12:36.025 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:36.025 Verification LBA range: start 0x5e80 length 0x5e80
00:12:36.025 Nvme0n1p1 : 5.97 101.91 6.37 0.00 0.00 1195920.68 21448.15 1151527.10
00:12:36.025 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:36.025 Verification LBA range: start 0x0 length 0x5e7f
00:12:36.025 Nvme0n1p2 : 5.84 104.03 6.50 0.00 0.00 1143633.77 100567.97 1014258.97
00:12:36.026 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:36.026 Verification LBA range: start 0x5e7f length 0x5e7f
00:12:36.026 Nvme0n1p2 : 5.87 103.55 6.47 0.00 0.00 1155988.41 119632.99 983754.94
00:12:36.026 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:36.026 Verification LBA range: start 0x0 length 0xa000
00:12:36.026 Nvme1n1 : 5.85 109.42 6.84 0.00 0.00 1077226.40 96278.34 1044763.00
00:12:36.026 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:36.026 Verification LBA range: start 0xa000 length 0xa000
00:12:36.026 Nvme1n1 : 5.97 107.23 6.70 0.00 0.00 1090143.70 93418.59 854112.81
00:12:36.026 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:36.026 Verification LBA range: start 0x0 length 0x8000
00:12:36.026 Nvme2n1 : 5.85 109.37 6.84 0.00 0.00 1044503.09 97708.22 1075267.03
00:12:36.026 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:36.026 Verification LBA range: start 0x8000 length 0x8000
00:12:36.026 Nvme2n1 : 5.97 107.18 6.70 0.00 0.00 1058702.06 94848.47 991380.95
00:12:36.026 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:36.026 Verification LBA range: start 0x0 length 0x8000
00:12:36.026 Nvme2n2 : 5.93 112.56 7.03 0.00 0.00 985699.67 71017.19 1098145.05
00:12:36.026 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:36.026 Verification LBA range: start 0x8000 length 0x8000
00:12:36.026 Nvme2n2 : 5.99 103.14 6.45 0.00 0.00 1073482.09 17873.45 1906501.82
00:12:36.026 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:36.026 Verification LBA range: start 0x0 length 0x8000
00:12:36.026 Nvme2n3 : 5.99 123.47 7.72 0.00 0.00 880387.60 17396.83 1136275.08
00:12:36.026 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:36.026 Verification LBA range: start 0x8000 length 0x8000
00:12:36.026 Nvme2n3 : 6.01 108.83 6.80 0.00 0.00 991921.61 14417.92 2226794.12
00:12:36.026 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:36.026 Verification LBA range: start 0x0 length 0x2000
00:12:36.026 Nvme3n1 : 5.99 132.30 8.27 0.00 0.00 798163.21 5600.35 1166779.11
00:12:36.026 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:36.026 Verification LBA range: start 0x2000 length 0x2000
00:12:36.026 Nvme3n1 : 6.03 124.12 7.76 0.00 0.00 844260.54 3291.69 2013265.92
00:12:36.026 ===================================================================================================================
00:12:36.026 Total : 1552.90 97.06 0.00 0.00 1026572.55 3291.69 2226794.12
00:12:37.402
00:12:37.402 real 0m9.136s
00:12:37.402 user 0m16.820s
00:12:37.402 sys 0m0.302s
00:12:37.402 13:54:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:37.402 13:54:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:12:37.402 ************************************
00:12:37.402 END TEST bdev_verify_big_io
00:12:37.402 ************************************
00:12:37.402 13:54:01 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0
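A quick cross-check of the two result tables: the MiB/s column is just IOPS times the I/O size, so the totals are self-consistent:

    17460.00 IOPS x 4096 B  =  71,516,160 B/s  /  1,048,576  ~  68.20 MiB/s   (4 KiB verify run)
    1552.90 IOPS  x 65536 B = 101,770,854 B/s  /  1,048,576  ~  97.06 MiB/s   (64 KiB big-I/O run)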
00:12:37.402 13:54:01 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:37.402 13:54:01 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:12:37.402 13:54:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:37.402 13:54:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:12:37.402 ************************************
00:12:37.402 START TEST bdev_write_zeroes
00:12:37.402 ************************************
00:12:37.402 13:54:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:37.662 [2024-07-15 13:54:01.807087] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:12:37.662 [2024-07-15 13:54:01.807238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69103 ]
00:12:37.662 [2024-07-15 13:54:01.970387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:37.662 [2024-07-15 13:54:02.189436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:38.598 Running I/O for 1 seconds...
00:12:39.529
00:12:39.529 Latency(us)
00:12:39.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:39.529 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:39.529 Nvme0n1p1 : 1.03 6276.50 24.52 0.00 0.00 20253.94 14358.34 34078.72
00:12:39.529 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:39.529 Nvme0n1p2 : 1.03 6256.59 24.44 0.00 0.00 20267.31 14060.45 35270.28
00:12:39.529 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:39.529 Nvme1n1 : 1.04 6238.38 24.37 0.00 0.00 20258.92 15073.28 34317.03
00:12:39.529 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:39.529 Nvme2n1 : 1.04 6260.72 24.46 0.00 0.00 20169.18 14060.45 29074.15
00:12:39.529 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:39.529 Nvme2n2 : 1.05 6242.77 24.39 0.00 0.00 20161.52 13762.56 28120.90
00:12:39.529 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:39.529 Nvme2n3 : 1.05 6229.59 24.33 0.00 0.00 20155.89 13405.09 28240.06
00:12:39.529 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:39.529 Nvme3n1 : 1.05 6220.47 24.30 0.00 0.00 20127.92 12094.37 28359.21
00:12:39.529 ===================================================================================================================
00:12:39.529 Total : 43725.01 170.80 0.00 0.00 20198.98 12094.37 35270.28
00:12:40.907
00:12:40.907 real 0m3.423s
00:12:40.907 user 0m3.079s
00:12:40.907 sys 0m0.216s
00:12:40.907 13:54:05 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:40.907 ************************************
00:12:40.907 END TEST bdev_write_zeroes
00:12:40.907 ************************************
00:12:40.907 13:54:05 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:12:40.907 13:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0
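Per the run_test line above, the write_zeroes pass reuses the same bdevperf binary and JSON config as the verify runs; only the workload and duration change, and it runs on a single core:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1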
00:12:40.907 13:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:40.907 13:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:12:40.907 13:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:40.907 13:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:12:40.907 ************************************
00:12:40.907 START TEST bdev_json_nonenclosed
00:12:40.907 ************************************
00:12:40.907 13:54:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:40.907 [2024-07-15 13:54:05.290584] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:12:40.908 [2024-07-15 13:54:05.290753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69162 ]
00:12:41.166 [2024-07-15 13:54:05.465300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:41.166 [2024-07-15 13:54:05.698400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:41.166 [2024-07-15 13:54:05.698508] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:12:41.166 [2024-07-15 13:54:05.698533] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:12:41.166 [2024-07-15 13:54:05.698548] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:41.779
00:12:41.779 real 0m0.930s
00:12:41.779 user 0m0.697s
00:12:41.779 sys 0m0.126s
00:12:41.779 13:54:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234
00:12:41.779 13:54:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:41.779 13:54:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:12:41.779 ************************************
00:12:41.779 END TEST bdev_json_nonenclosed
00:12:41.779 ************************************
00:12:41.779 13:54:06 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234
00:12:41.779 13:54:06 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true
00:12:41.779 13:54:06 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:41.779 13:54:06 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:12:41.779 13:54:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:41.779 13:54:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:12:41.779 ************************************
00:12:41.779 START TEST bdev_json_nonarray
00:12:41.779 ************************************
00:12:41.779 13:54:06 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:41.779 [2024-07-15 13:54:06.256985] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:12:42.037 [2024-07-15 13:54:06.257140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69193 ]
00:12:42.037 [2024-07-15 13:54:06.422912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:42.295 [2024-07-15 13:54:06.670484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:42.295 [2024-07-15 13:54:06.670615] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:12:42.295 [2024-07-15 13:54:06.670646] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:12:42.295 [2024-07-15 13:54:06.670663] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:42.862
00:12:42.862 real 0m0.964s
00:12:42.862 user 0m0.718s
00:12:42.862 sys 0m0.137s
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:12:42.862 ************************************
00:12:42.862 END TEST bdev_json_nonarray
00:12:42.862 ************************************
00:12:42.862 13:54:07 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234
00:12:42.862 13:54:07 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true
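Both JSON tests above are negative tests: nonenclosed.json is rejected because the configuration is not enclosed in {}, nonarray.json because its 'subsystems' key is not an array, and in each case the harness expects the error status 234. Reconstructed from those two error messages, a minimal well-formed config has this shape (the output file name and the empty bdev subsystem entry are only illustrative placeholders):

    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": []
        }
      ]
    }
    EOF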
00:12:42.862 13:54:07 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]]
00:12:42.862 13:54:07 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]]
00:12:42.862 13:54:07 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:12:42.862 13:54:07 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:12:42.862 13:54:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:42.862 13:54:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:12:42.862 ************************************
00:12:42.862 START TEST bdev_gpt_uuid
00:12:42.862 ************************************
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69224
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 69224
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 69224 ']'
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:42.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:42.862 13:54:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:12:42.862 [2024-07-15 13:54:07.279995] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:12:42.862 [2024-07-15 13:54:07.280154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69224 ]
00:12:43.120 [2024-07-15 13:54:07.443785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:43.378 [2024-07-15 13:54:07.721218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:43.944 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:43.944 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0
00:12:43.944 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:12:43.944 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:43.944 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:12:44.511 Some configs were skipped because the RPC state that can call them passed over.
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[
00:12:44.511 {
00:12:44.511 "name": "Nvme0n1p1",
00:12:44.511 "aliases": [
00:12:44.511 "6f89f330-603b-4116-ac73-2ca8eae53030"
00:12:44.511 ],
00:12:44.511 "product_name": "GPT Disk",
00:12:44.511 "block_size": 4096,
00:12:44.511 "num_blocks": 774144,
00:12:44.511 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:12:44.511 "md_size": 64,
00:12:44.511 "md_interleave": false,
00:12:44.511 "dif_type": 0,
00:12:44.511 "assigned_rate_limits": {
00:12:44.511 "rw_ios_per_sec": 0,
00:12:44.511 "rw_mbytes_per_sec": 0,
00:12:44.511 "r_mbytes_per_sec": 0,
00:12:44.511 "w_mbytes_per_sec": 0
00:12:44.511 },
00:12:44.511 "claimed": false,
00:12:44.511 "zoned": false,
00:12:44.511 "supported_io_types": {
00:12:44.511 "read": true,
00:12:44.511 "write": true,
00:12:44.511 "unmap": true,
00:12:44.511 "flush": true,
00:12:44.511 "reset": true,
00:12:44.511 "nvme_admin": false,
00:12:44.511 "nvme_io": false,
00:12:44.511 "nvme_io_md": false,
00:12:44.511 "write_zeroes": true,
00:12:44.511 "zcopy": false,
00:12:44.511 "get_zone_info": false,
00:12:44.511 "zone_management": false,
00:12:44.511 "zone_append": false,
00:12:44.511 "compare": true,
00:12:44.511 "compare_and_write": false,
00:12:44.511 "abort": true,
00:12:44.511 "seek_hole": false,
00:12:44.511 "seek_data": false,
00:12:44.511 "copy": true,
00:12:44.511 "nvme_iov_md": false
00:12:44.511 },
00:12:44.511 "driver_specific": {
00:12:44.511 "gpt": {
00:12:44.511 "base_bdev": "Nvme0n1",
00:12:44.511 "offset_blocks": 256,
00:12:44.511 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:12:44.511 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:12:44.511 "partition_name": "SPDK_TEST_first"
00:12:44.511 }
00:12:44.511 }
00:12:44.511 }
00:12:44.511 ]'
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]]
00:12:44.511 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]'
00:12:44.512 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:12:44.512 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:12:44.512 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:12:44.512 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:12:44.512 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.512 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:12:44.512 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.512 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[
00:12:44.512 {
00:12:44.512 "name": "Nvme0n1p2",
00:12:44.512 "aliases": [
00:12:44.512 "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:12:44.512 ],
00:12:44.512 "product_name": "GPT Disk",
00:12:44.512 "block_size": 4096,
00:12:44.512 "num_blocks": 774143,
00:12:44.512 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:12:44.512 "md_size": 64,
00:12:44.512 "md_interleave": false,
00:12:44.512 "dif_type": 0,
00:12:44.512 "assigned_rate_limits": {
00:12:44.512 "rw_ios_per_sec": 0,
00:12:44.512 "rw_mbytes_per_sec": 0,
00:12:44.512 "r_mbytes_per_sec": 0,
00:12:44.512 "w_mbytes_per_sec": 0
00:12:44.512 },
00:12:44.512 "claimed": false,
00:12:44.512 "zoned": false,
00:12:44.512 "supported_io_types": {
00:12:44.512 "read": true,
00:12:44.512 "write": true,
00:12:44.512 "unmap": true,
00:12:44.512 "flush": true,
00:12:44.512 "reset": true,
00:12:44.512 "nvme_admin": false,
00:12:44.512 "nvme_io": false,
00:12:44.512 "nvme_io_md": false,
00:12:44.512 "write_zeroes": true,
00:12:44.512 "zcopy": false,
00:12:44.512 "get_zone_info": false,
00:12:44.512 "zone_management": false,
00:12:44.512 "zone_append": false,
00:12:44.512 "compare": true,
00:12:44.512 "compare_and_write": false,
00:12:44.512 "abort": true,
00:12:44.512 "seek_hole": false,
00:12:44.512 "seek_data": false,
00:12:44.512 "copy": true,
00:12:44.512 "nvme_iov_md": false
00:12:44.512 },
00:12:44.512 "driver_specific": {
00:12:44.512 "gpt": {
00:12:44.512 "base_bdev": "Nvme0n1",
00:12:44.512 "offset_blocks": 774400,
00:12:44.512 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:12:44.512 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:12:44.512 "partition_name": "SPDK_TEST_second"
00:12:44.512 }
00:12:44.512 }
00:12:44.512 }
00:12:44.512 ]'
00:12:44.512 13:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length
00:12:44.512 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]]
00:12:44.512 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]'
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 69224
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 69224 ']'
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 69224
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69224
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:44.770 killing process with pid 69224
13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69224'
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 69224
00:12:44.770 13:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 69224
00:12:47.292
00:12:47.292 real 0m4.194s
00:12:47.292 user 0m4.502s
00:12:47.292 sys 0m0.448s
00:12:47.292 13:54:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:47.292 ************************************
00:12:47.292 END TEST bdev_gpt_uuid
00:12:47.292 ************************************
00:12:47.292 13:54:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:12:47.292 13:54:11 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0
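The gpt_uuid test drives everything through rpc_cmd, a thin wrapper over scripts/rpc.py: it fetches each GPT partition bdev by its unique partition GUID and asserts that exactly one bdev comes back whose alias and driver_specific GUID echo the same value. A condensed sketch of one such check, using the first GUID from this run (rpc.py's default /var/tmp/spdk.sock socket assumed):

    guid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b "$guid")
    # exactly one bdev must match, and both GUID fields must round-trip
    [[ $(jq -r length <<< "$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$guid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$guid" ]]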
00:12:47.292 13:54:11 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]]
00:12:47.292 13:54:11 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT
00:12:47.292 13:54:11 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup
00:12:47.292 13:54:11 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:12:47.292 13:54:11 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:12:47.292 13:54:11 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]]
00:12:47.292 13:54:11 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]]
00:12:47.292 13:54:11 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]]
00:12:47.292 13:54:11 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:12:47.292 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:47.549 Waiting for block devices as requested
00:12:47.549 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:12:47.549 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:12:47.807 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:12:47.807 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:12:53.085 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:12:53.085 13:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]]
00:12:53.085 13:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1
00:12:53.085 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:12:53.085 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54
00:12:53.085 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:12:53.085 /dev/nvme1n1: calling ioctl to re-read partition table: Success
00:12:53.085 13:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]]
00:12:53.085
00:12:53.085 real 1m6.408s
00:12:53.085 user 1m24.839s
00:12:53.085 sys 0m9.822s
00:12:53.085 13:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:53.085 13:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:12:53.085 ************************************
00:12:53.085 END TEST blockdev_nvme_gpt
00:12:53.085 ************************************
00:12:53.343 13:54:17 -- common/autotest_common.sh@1142 -- # return 0
00:12:53.343 13:54:17 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:12:53.343 13:54:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:12:53.343 13:54:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:53.343 13:54:17 -- common/autotest_common.sh@10 -- # set +x
00:12:53.343 ************************************
00:12:53.343 START TEST nvme
00:12:53.343 ************************************
00:12:53.343 13:54:17 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:12:53.343 * Looking for test storage...
00:12:53.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:53.343 13:54:17 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:53.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:54.168 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:54.168 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:54.426 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:12:54.426 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:12:54.426 13:54:18 nvme -- nvme/nvme.sh@79 -- # uname
00:12:54.426 13:54:18 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:12:54.426 13:54:18 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:12:54.426 13:54:18 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:12:54.426 13:54:18 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:12:54.426 13:54:18 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2
00:12:54.426 13:54:18 nvme -- common/autotest_common.sh@1067 -- # echo 0
00:12:54.426 Waiting for stub to ready for secondary processes...
00:12:54.426 13:54:18 nvme -- common/autotest_common.sh@1069 -- # stubpid=69861
00:12:54.426 13:54:18 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:12:54.426 13:54:18 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes...
00:12:54.426 13:54:18 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']'
00:12:54.426 13:54:18 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69861 ]]
00:12:54.426 13:54:18 nvme -- common/autotest_common.sh@1074 -- # sleep 1s
00:12:54.426 [2024-07-15 13:54:18.940082] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:12:54.426 [2024-07-15 13:54:18.940363] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ]
00:12:55.361 [2024-07-15 13:54:19.670934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:55.361 13:54:19 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']'
00:12:55.361 13:54:19 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69861 ]]
00:12:55.361 13:54:19 nvme -- common/autotest_common.sh@1074 -- # sleep 1s
00:12:55.361 [2024-07-15 13:54:19.887581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:12:55.361 [2024-07-15 13:54:19.887688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:12:55.361 [2024-07-15 13:54:19.887689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:55.361 [2024-07-15 13:54:19.909819] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands
00:12:55.619 [2024-07-15 13:54:19.910201] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:12:55.619 [2024-07-15 13:54:19.919496] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:12:55.619 [2024-07-15 13:54:19.919965] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:12:55.619 [2024-07-15 13:54:19.923257] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:12:55.619 [2024-07-15 13:54:19.923552] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created
00:12:55.619 [2024-07-15 13:54:19.923690] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created
00:12:55.619 [2024-07-15 13:54:19.926913] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:12:55.619 [2024-07-15 13:54:19.927151] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created
00:12:55.619 [2024-07-15 13:54:19.927294] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created
00:12:55.619 [2024-07-15 13:54:19.930464] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:12:55.619 [2024-07-15 13:54:19.930721] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created
00:12:55.620 [2024-07-15 13:54:19.930855] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created
00:12:55.620 [2024-07-15 13:54:19.930975] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created
00:12:55.620 [2024-07-15 13:54:19.931106] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created
00:12:56.554 13:54:20 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']'
00:12:56.554 done.
00:12:56.554 13:54:20 nvme -- common/autotest_common.sh@1076 -- # echo done.
00:12:56.554 13:54:20 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:12:56.554 13:54:20 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:12:56.554 13:54:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:56.554 13:54:20 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:56.554 ************************************
00:12:56.554 START TEST nvme_reset
00:12:56.554 ************************************
00:12:56.554 13:54:20 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:12:56.812 Initializing NVMe Controllers
00:12:56.812 Skipping QEMU NVMe SSD at 0000:00:10.0
00:12:56.812 Skipping QEMU NVMe SSD at 0000:00:11.0
00:12:56.812 Skipping QEMU NVMe SSD at 0000:00:13.0
00:12:56.812 Skipping QEMU NVMe SSD at 0000:00:12.0
00:12:56.812 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:12:56.812
00:12:56.812 real 0m0.255s
00:12:56.812 user 0m0.092s
00:12:56.812 sys 0m0.125s
00:12:56.812 13:54:21 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:56.812 13:54:21 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x
00:12:56.812 ************************************
00:12:56.812 END TEST nvme_reset
00:12:56.812 ************************************
00:12:56.812 13:54:21 nvme -- common/autotest_common.sh@1142 -- # return 0
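Before the nvme subtests start, nvme.sh launches the stub app as the long-lived DPDK primary process, and the repeated '[' -e /var/run/spdk_stub0 ']' / sleep 1s lines above are the harness polling until the stub publishes its socket, while checking that pid 69861 is still alive. Roughly (a sketch of the waiting logic, not the exact harness code):

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    # poll once per second until the stub's socket exists, as long as the process lives
    while [ ! -e /var/run/spdk_stub0 ] && [ -e "/proc/$stubpid" ]; do
        sleep 1s
    done
    echo done.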
nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:12:57.073 [2024-07-15 13:54:21.469202] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69890 terminated unexpected 00:12:57.073 ===================================================== 00:12:57.073 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:57.073 ===================================================== 00:12:57.073 Controller Capabilities/Features 00:12:57.073 ================================ 00:12:57.073 Vendor ID: 1b36 00:12:57.073 Subsystem Vendor ID: 1af4 00:12:57.073 Serial Number: 12340 00:12:57.073 Model Number: QEMU NVMe Ctrl 00:12:57.073 Firmware Version: 8.0.0 00:12:57.073 Recommended Arb Burst: 6 00:12:57.073 IEEE OUI Identifier: 00 54 52 00:12:57.073 Multi-path I/O 00:12:57.073 May have multiple subsystem ports: No 00:12:57.073 May have multiple controllers: No 00:12:57.073 Associated with SR-IOV VF: No 00:12:57.073 Max Data Transfer Size: 524288 00:12:57.073 Max Number of Namespaces: 256 00:12:57.073 Max Number of I/O Queues: 64 00:12:57.073 NVMe Specification Version (VS): 1.4 00:12:57.073 NVMe Specification Version (Identify): 1.4 00:12:57.073 Maximum Queue Entries: 2048 00:12:57.073 Contiguous Queues Required: Yes 00:12:57.073 Arbitration Mechanisms Supported 00:12:57.073 Weighted Round Robin: Not Supported 00:12:57.073 Vendor Specific: Not Supported 00:12:57.073 Reset Timeout: 7500 ms 00:12:57.073 Doorbell Stride: 4 bytes 00:12:57.073 NVM Subsystem Reset: Not Supported 00:12:57.073 Command Sets Supported 00:12:57.073 NVM Command Set: Supported 00:12:57.073 Boot Partition: Not Supported 00:12:57.073 Memory Page Size Minimum: 4096 bytes 00:12:57.073 Memory Page Size Maximum: 65536 bytes 00:12:57.073 Persistent Memory Region: Not Supported 00:12:57.073 Optional Asynchronous Events Supported 00:12:57.073 Namespace Attribute Notices: Supported 00:12:57.073 Firmware Activation Notices: Not Supported 00:12:57.073 ANA Change Notices: Not Supported 00:12:57.073 PLE Aggregate Log Change Notices: Not Supported 00:12:57.073 LBA Status Info Alert Notices: Not Supported 00:12:57.073 EGE Aggregate Log Change Notices: Not Supported 00:12:57.073 Normal NVM Subsystem Shutdown event: Not Supported 00:12:57.073 Zone Descriptor Change Notices: Not Supported 00:12:57.073 Discovery Log Change Notices: Not Supported 00:12:57.073 Controller Attributes 00:12:57.073 128-bit Host Identifier: Not Supported 00:12:57.073 Non-Operational Permissive Mode: Not Supported 00:12:57.073 NVM Sets: Not Supported 00:12:57.073 Read Recovery Levels: Not Supported 00:12:57.073 Endurance Groups: Not Supported 00:12:57.073 Predictable Latency Mode: Not Supported 00:12:57.073 Traffic Based Keep ALive: Not Supported 00:12:57.073 Namespace Granularity: Not Supported 00:12:57.073 SQ Associations: Not Supported 00:12:57.073 UUID List: Not Supported 00:12:57.073 Multi-Domain Subsystem: Not Supported 00:12:57.073 Fixed Capacity Management: Not Supported 00:12:57.073 Variable Capacity Management: Not Supported 00:12:57.073 Delete Endurance Group: Not Supported 00:12:57.073 Delete NVM Set: Not Supported 00:12:57.073 Extended LBA Formats Supported: Supported 00:12:57.073 Flexible Data Placement Supported: Not Supported 00:12:57.073 00:12:57.073 Controller Memory Buffer Support 00:12:57.073 ================================ 00:12:57.073 Supported: No 00:12:57.073 00:12:57.073 Persistent Memory Region Support 00:12:57.073 ================================ 00:12:57.073 Supported: No 
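Before this identify dump started, the trace above shows the readiness handshake the harness performs: it launches test/app/stub, records its PID, and then loops from 13:54:18 through 13:54:20 polling for /var/run/spdk_stub0 once per second while checking /proc/<stubpid> so it cannot hang on a stub that died. A minimal standalone sketch of that pattern, with the binary path and flags taken from the trace and error handling simplified:

# start the primary-process stub, then wait until it signals readiness
stub=/home/vagrant/spdk_repo/spdk/test/app/stub/stub
"$stub" -s 4096 -i 0 -m 0xE &
stubpid=$!
while [ ! -e /var/run/spdk_stub0 ]; do
    # give up if the stub exited before creating the readiness file
    [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
    sleep 1s
done
echo done.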
00:12:57.073 00:12:57.073 Admin Command Set Attributes 00:12:57.073 ============================ 00:12:57.073 Security Send/Receive: Not Supported 00:12:57.073 Format NVM: Supported 00:12:57.073 Firmware Activate/Download: Not Supported 00:12:57.073 Namespace Management: Supported 00:12:57.073 Device Self-Test: Not Supported 00:12:57.073 Directives: Supported 00:12:57.073 NVMe-MI: Not Supported 00:12:57.073 Virtualization Management: Not Supported 00:12:57.073 Doorbell Buffer Config: Supported 00:12:57.073 Get LBA Status Capability: Not Supported 00:12:57.073 Command & Feature Lockdown Capability: Not Supported 00:12:57.073 Abort Command Limit: 4 00:12:57.073 Async Event Request Limit: 4 00:12:57.073 Number of Firmware Slots: N/A 00:12:57.073 Firmware Slot 1 Read-Only: N/A 00:12:57.073 Firmware Activation Without Reset: N/A 00:12:57.073 Multiple Update Detection Support: N/A 00:12:57.073 Firmware Update Granularity: No Information Provided 00:12:57.073 Per-Namespace SMART Log: Yes 00:12:57.073 Asymmetric Namespace Access Log Page: Not Supported 00:12:57.073 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:57.073 Command Effects Log Page: Supported 00:12:57.073 Get Log Page Extended Data: Supported 00:12:57.073 Telemetry Log Pages: Not Supported 00:12:57.073 Persistent Event Log Pages: Not Supported 00:12:57.073 Supported Log Pages Log Page: May Support 00:12:57.073 Commands Supported & Effects Log Page: Not Supported 00:12:57.073 Feature Identifiers & Effects Log Page:May Support 00:12:57.073 NVMe-MI Commands & Effects Log Page: May Support 00:12:57.073 Data Area 4 for Telemetry Log: Not Supported 00:12:57.073 Error Log Page Entries Supported: 1 00:12:57.073 Keep Alive: Not Supported 00:12:57.073 00:12:57.073 NVM Command Set Attributes 00:12:57.073 ========================== 00:12:57.073 Submission Queue Entry Size 00:12:57.073 Max: 64 00:12:57.073 Min: 64 00:12:57.073 Completion Queue Entry Size 00:12:57.073 Max: 16 00:12:57.073 Min: 16 00:12:57.073 Number of Namespaces: 256 00:12:57.073 Compare Command: Supported 00:12:57.073 Write Uncorrectable Command: Not Supported 00:12:57.073 Dataset Management Command: Supported 00:12:57.073 Write Zeroes Command: Supported 00:12:57.073 Set Features Save Field: Supported 00:12:57.073 Reservations: Not Supported 00:12:57.074 Timestamp: Supported 00:12:57.074 Copy: Supported 00:12:57.074 Volatile Write Cache: Present 00:12:57.074 Atomic Write Unit (Normal): 1 00:12:57.074 Atomic Write Unit (PFail): 1 00:12:57.074 Atomic Compare & Write Unit: 1 00:12:57.074 Fused Compare & Write: Not Supported 00:12:57.074 Scatter-Gather List 00:12:57.074 SGL Command Set: Supported 00:12:57.074 SGL Keyed: Not Supported 00:12:57.074 SGL Bit Bucket Descriptor: Not Supported 00:12:57.074 SGL Metadata Pointer: Not Supported 00:12:57.074 Oversized SGL: Not Supported 00:12:57.074 SGL Metadata Address: Not Supported 00:12:57.074 SGL Offset: Not Supported 00:12:57.074 Transport SGL Data Block: Not Supported 00:12:57.074 Replay Protected Memory Block: Not Supported 00:12:57.074 00:12:57.074 Firmware Slot Information 00:12:57.074 ========================= 00:12:57.074 Active slot: 1 00:12:57.074 Slot 1 Firmware Revision: 1.0 00:12:57.074 00:12:57.074 00:12:57.074 Commands Supported and Effects 00:12:57.074 ============================== 00:12:57.074 Admin Commands 00:12:57.074 -------------- 00:12:57.074 Delete I/O Submission Queue (00h): Supported 00:12:57.074 Create I/O Submission Queue (01h): Supported 00:12:57.074 Get Log Page (02h): Supported 00:12:57.074 Delete I/O 
Completion Queue (04h): Supported 00:12:57.074 Create I/O Completion Queue (05h): Supported 00:12:57.074 Identify (06h): Supported 00:12:57.074 Abort (08h): Supported 00:12:57.074 Set Features (09h): Supported 00:12:57.074 Get Features (0Ah): Supported 00:12:57.074 Asynchronous Event Request (0Ch): Supported 00:12:57.074 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:57.074 Directive Send (19h): Supported 00:12:57.074 Directive Receive (1Ah): Supported 00:12:57.074 Virtualization Management (1Ch): Supported 00:12:57.074 Doorbell Buffer Config (7Ch): Supported 00:12:57.074 Format NVM (80h): Supported LBA-Change 00:12:57.074 I/O Commands 00:12:57.074 ------------ 00:12:57.074 Flush (00h): Supported LBA-Change 00:12:57.074 Write (01h): Supported LBA-Change 00:12:57.074 Read (02h): Supported 00:12:57.074 Compare (05h): Supported 00:12:57.074 Write Zeroes (08h): Supported LBA-Change 00:12:57.074 Dataset Management (09h): Supported LBA-Change 00:12:57.074 Unknown (0Ch): Supported 00:12:57.074 Unknown (12h): Supported 00:12:57.074 Copy (19h): Supported LBA-Change 00:12:57.074 Unknown (1Dh): Supported LBA-Change 00:12:57.074 00:12:57.074 Error Log 00:12:57.074 ========= 00:12:57.074 00:12:57.074 Arbitration 00:12:57.074 =========== 00:12:57.074 Arbitration Burst: no limit 00:12:57.074 00:12:57.074 Power Management 00:12:57.074 ================ 00:12:57.074 Number of Power States: 1 00:12:57.074 Current Power State: Power State #0 00:12:57.074 Power State #0: 00:12:57.074 Max Power: 25.00 W 00:12:57.074 Non-Operational State: Operational 00:12:57.074 Entry Latency: 16 microseconds 00:12:57.074 Exit Latency: 4 microseconds 00:12:57.074 Relative Read Throughput: 0 00:12:57.074 Relative Read Latency: 0 00:12:57.074 Relative Write Throughput: 0 00:12:57.074 Relative Write Latency: 0 00:12:57.074 Idle Power[2024-07-15 13:54:21.470322] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69890 terminated unexpected 00:12:57.074 : Not Reported 00:12:57.074 Active Power: Not Reported 00:12:57.074 Non-Operational Permissive Mode: Not Supported 00:12:57.074 00:12:57.074 Health Information 00:12:57.074 ================== 00:12:57.074 Critical Warnings: 00:12:57.074 Available Spare Space: OK 00:12:57.074 Temperature: OK 00:12:57.074 Device Reliability: OK 00:12:57.074 Read Only: No 00:12:57.074 Volatile Memory Backup: OK 00:12:57.074 Current Temperature: 323 Kelvin (50 Celsius) 00:12:57.074 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:57.074 Available Spare: 0% 00:12:57.074 Available Spare Threshold: 0% 00:12:57.074 Life Percentage Used: 0% 00:12:57.074 Data Units Read: 981 00:12:57.074 Data Units Written: 819 00:12:57.074 Host Read Commands: 46937 00:12:57.074 Host Write Commands: 45530 00:12:57.074 Controller Busy Time: 0 minutes 00:12:57.074 Power Cycles: 0 00:12:57.074 Power On Hours: 0 hours 00:12:57.074 Unsafe Shutdowns: 0 00:12:57.074 Unrecoverable Media Errors: 0 00:12:57.074 Lifetime Error Log Entries: 0 00:12:57.074 Warning Temperature Time: 0 minutes 00:12:57.074 Critical Temperature Time: 0 minutes 00:12:57.074 00:12:57.074 Number of Queues 00:12:57.074 ================ 00:12:57.074 Number of I/O Submission Queues: 64 00:12:57.074 Number of I/O Completion Queues: 64 00:12:57.074 00:12:57.074 ZNS Specific Controller Data 00:12:57.074 ============================ 00:12:57.074 Zone Append Size Limit: 0 00:12:57.074 00:12:57.074 00:12:57.074 Active Namespaces 00:12:57.074 ================= 00:12:57.074 Namespace ID:1 00:12:57.074 Error Recovery 
Timeout: Unlimited 00:12:57.074 Command Set Identifier: NVM (00h) 00:12:57.074 Deallocate: Supported 00:12:57.074 Deallocated/Unwritten Error: Supported 00:12:57.074 Deallocated Read Value: All 0x00 00:12:57.074 Deallocate in Write Zeroes: Not Supported 00:12:57.074 Deallocated Guard Field: 0xFFFF 00:12:57.074 Flush: Supported 00:12:57.074 Reservation: Not Supported 00:12:57.074 Metadata Transferred as: Separate Metadata Buffer 00:12:57.074 Namespace Sharing Capabilities: Private 00:12:57.074 Size (in LBAs): 1548666 (5GiB) 00:12:57.074 Capacity (in LBAs): 1548666 (5GiB) 00:12:57.074 Utilization (in LBAs): 1548666 (5GiB) 00:12:57.074 Thin Provisioning: Not Supported 00:12:57.074 Per-NS Atomic Units: No 00:12:57.074 Maximum Single Source Range Length: 128 00:12:57.074 Maximum Copy Length: 128 00:12:57.074 Maximum Source Range Count: 128 00:12:57.074 NGUID/EUI64 Never Reused: No 00:12:57.074 Namespace Write Protected: No 00:12:57.074 Number of LBA Formats: 8 00:12:57.074 Current LBA Format: LBA Format #07 00:12:57.074 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.074 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:57.074 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:57.074 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:57.074 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:57.074 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:57.074 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:57.074 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:57.074 00:12:57.074 NVM Specific Namespace Data 00:12:57.074 =========================== 00:12:57.074 Logical Block Storage Tag Mask: 0 00:12:57.074 Protection Information Capabilities: 00:12:57.074 16b Guard Protection Information Storage Tag Support: No 00:12:57.074 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:57.074 Storage Tag Check Read Support: No 00:12:57.074 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.074 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.074 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.074 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.074 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.074 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.074 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.074 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.074 ===================================================== 00:12:57.074 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:57.074 ===================================================== 00:12:57.074 Controller Capabilities/Features 00:12:57.074 ================================ 00:12:57.075 Vendor ID: 1b36 00:12:57.075 Subsystem Vendor ID: 1af4 00:12:57.075 Serial Number: 12341 00:12:57.075 Model Number: QEMU NVMe Ctrl 00:12:57.075 Firmware Version: 8.0.0 00:12:57.075 Recommended Arb Burst: 6 00:12:57.075 IEEE OUI Identifier: 00 54 52 00:12:57.075 Multi-path I/O 00:12:57.075 May have multiple subsystem ports: No 00:12:57.075 May have multiple controllers: No 00:12:57.075 Associated with SR-IOV VF: No 00:12:57.075 Max Data Transfer Size: 524288 00:12:57.075 Max Number of Namespaces: 256 
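The parenthesized sizes in the namespace data are derived rather than reported: the 12340 namespace above shows Size (in LBAs): 1548666 under current LBA Format #07 (4096-byte data blocks), and 1548666 x 4096 = 6343335936 bytes, consistent with the 5GiB label under truncating division by 2^30. A quick shell check of that arithmetic (variable names are purely illustrative):

lbas=1548666 bs=4096
echo $((lbas * bs)) bytes            # 6343335936
echo $((lbas * bs / 1024 ** 3))GiB   # 5GiB, matching the dump

The 12341 namespace that follows works out exactly: 1310720 x 4096 bytes is precisely 5 GiB.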
00:12:57.075 Max Number of I/O Queues: 64 00:12:57.075 NVMe Specification Version (VS): 1.4 00:12:57.075 NVMe Specification Version (Identify): 1.4 00:12:57.075 Maximum Queue Entries: 2048 00:12:57.075 Contiguous Queues Required: Yes 00:12:57.075 Arbitration Mechanisms Supported 00:12:57.075 Weighted Round Robin: Not Supported 00:12:57.075 Vendor Specific: Not Supported 00:12:57.075 Reset Timeout: 7500 ms 00:12:57.075 Doorbell Stride: 4 bytes 00:12:57.075 NVM Subsystem Reset: Not Supported 00:12:57.075 Command Sets Supported 00:12:57.075 NVM Command Set: Supported 00:12:57.075 Boot Partition: Not Supported 00:12:57.075 Memory Page Size Minimum: 4096 bytes 00:12:57.075 Memory Page Size Maximum: 65536 bytes 00:12:57.075 Persistent Memory Region: Not Supported 00:12:57.075 Optional Asynchronous Events Supported 00:12:57.075 Namespace Attribute Notices: Supported 00:12:57.075 Firmware Activation Notices: Not Supported 00:12:57.075 ANA Change Notices: Not Supported 00:12:57.075 PLE Aggregate Log Change Notices: Not Supported 00:12:57.075 LBA Status Info Alert Notices: Not Supported 00:12:57.075 EGE Aggregate Log Change Notices: Not Supported 00:12:57.075 Normal NVM Subsystem Shutdown event: Not Supported 00:12:57.075 Zone Descriptor Change Notices: Not Supported 00:12:57.075 Discovery Log Change Notices: Not Supported 00:12:57.075 Controller Attributes 00:12:57.075 128-bit Host Identifier: Not Supported 00:12:57.075 Non-Operational Permissive Mode: Not Supported 00:12:57.075 NVM Sets: Not Supported 00:12:57.075 Read Recovery Levels: Not Supported 00:12:57.075 Endurance Groups: Not Supported 00:12:57.075 Predictable Latency Mode: Not Supported 00:12:57.075 Traffic Based Keep ALive: Not Supported 00:12:57.075 Namespace Granularity: Not Supported 00:12:57.075 SQ Associations: Not Supported 00:12:57.075 UUID List: Not Supported 00:12:57.075 Multi-Domain Subsystem: Not Supported 00:12:57.075 Fixed Capacity Management: Not Supported 00:12:57.075 Variable Capacity Management: Not Supported 00:12:57.075 Delete Endurance Group: Not Supported 00:12:57.075 Delete NVM Set: Not Supported 00:12:57.075 Extended LBA Formats Supported: Supported 00:12:57.075 Flexible Data Placement Supported: Not Supported 00:12:57.075 00:12:57.075 Controller Memory Buffer Support 00:12:57.075 ================================ 00:12:57.075 Supported: No 00:12:57.075 00:12:57.075 Persistent Memory Region Support 00:12:57.075 ================================ 00:12:57.075 Supported: No 00:12:57.075 00:12:57.075 Admin Command Set Attributes 00:12:57.075 ============================ 00:12:57.075 Security Send/Receive: Not Supported 00:12:57.075 Format NVM: Supported 00:12:57.075 Firmware Activate/Download: Not Supported 00:12:57.075 Namespace Management: Supported 00:12:57.075 Device Self-Test: Not Supported 00:12:57.075 Directives: Supported 00:12:57.075 NVMe-MI: Not Supported 00:12:57.075 Virtualization Management: Not Supported 00:12:57.075 Doorbell Buffer Config: Supported 00:12:57.075 Get LBA Status Capability: Not Supported 00:12:57.075 Command & Feature Lockdown Capability: Not Supported 00:12:57.075 Abort Command Limit: 4 00:12:57.075 Async Event Request Limit: 4 00:12:57.075 Number of Firmware Slots: N/A 00:12:57.075 Firmware Slot 1 Read-Only: N/A 00:12:57.075 Firmware Activation Without Reset: N/A 00:12:57.075 Multiple Update Detection Support: N/A 00:12:57.075 Firmware Update Granularity: No Information Provided 00:12:57.075 Per-Namespace SMART Log: Yes 00:12:57.075 Asymmetric Namespace Access Log Page: Not Supported 
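All four controllers identify as QEMU NVMe Ctrl (vendor 1b36), so the serial numbers 12340 through 12343 and the subsystem NQNs of the form nqn.2019-08.org.qemu:<serial> trace back to the VM definition rather than to real hardware. That definition is not part of this log; as a rough, hypothetical sketch, a QEMU fragment resembling the 12341 device could look like the following (image path and format are made up, not taken from this run):

-drive file=nvme-1.img,if=none,id=nvme1,format=raw
-device nvme,drive=nvme1,serial=12341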
00:12:57.075 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:57.075 Command Effects Log Page: Supported 00:12:57.075 Get Log Page Extended Data: Supported 00:12:57.075 Telemetry Log Pages: Not Supported 00:12:57.075 Persistent Event Log Pages: Not Supported 00:12:57.075 Supported Log Pages Log Page: May Support 00:12:57.075 Commands Supported & Effects Log Page: Not Supported 00:12:57.075 Feature Identifiers & Effects Log Page:May Support 00:12:57.075 NVMe-MI Commands & Effects Log Page: May Support 00:12:57.075 Data Area 4 for Telemetry Log: Not Supported 00:12:57.075 Error Log Page Entries Supported: 1 00:12:57.075 Keep Alive: Not Supported 00:12:57.075 00:12:57.075 NVM Command Set Attributes 00:12:57.075 ========================== 00:12:57.075 Submission Queue Entry Size 00:12:57.075 Max: 64 00:12:57.075 Min: 64 00:12:57.075 Completion Queue Entry Size 00:12:57.075 Max: 16 00:12:57.075 Min: 16 00:12:57.075 Number of Namespaces: 256 00:12:57.075 Compare Command: Supported 00:12:57.075 Write Uncorrectable Command: Not Supported 00:12:57.075 Dataset Management Command: Supported 00:12:57.075 Write Zeroes Command: Supported 00:12:57.075 Set Features Save Field: Supported 00:12:57.075 Reservations: Not Supported 00:12:57.075 Timestamp: Supported 00:12:57.075 Copy: Supported 00:12:57.075 Volatile Write Cache: Present 00:12:57.075 Atomic Write Unit (Normal): 1 00:12:57.075 Atomic Write Unit (PFail): 1 00:12:57.075 Atomic Compare & Write Unit: 1 00:12:57.075 Fused Compare & Write: Not Supported 00:12:57.075 Scatter-Gather List 00:12:57.075 SGL Command Set: Supported 00:12:57.075 SGL Keyed: Not Supported 00:12:57.075 SGL Bit Bucket Descriptor: Not Supported 00:12:57.075 SGL Metadata Pointer: Not Supported 00:12:57.075 Oversized SGL: Not Supported 00:12:57.075 SGL Metadata Address: Not Supported 00:12:57.075 SGL Offset: Not Supported 00:12:57.075 Transport SGL Data Block: Not Supported 00:12:57.075 Replay Protected Memory Block: Not Supported 00:12:57.075 00:12:57.075 Firmware Slot Information 00:12:57.075 ========================= 00:12:57.075 Active slot: 1 00:12:57.075 Slot 1 Firmware Revision: 1.0 00:12:57.075 00:12:57.075 00:12:57.075 Commands Supported and Effects 00:12:57.075 ============================== 00:12:57.075 Admin Commands 00:12:57.075 -------------- 00:12:57.075 Delete I/O Submission Queue (00h): Supported 00:12:57.075 Create I/O Submission Queue (01h): Supported 00:12:57.075 Get Log Page (02h): Supported 00:12:57.075 Delete I/O Completion Queue (04h): Supported 00:12:57.075 Create I/O Completion Queue (05h): Supported 00:12:57.075 Identify (06h): Supported 00:12:57.075 Abort (08h): Supported 00:12:57.075 Set Features (09h): Supported 00:12:57.075 Get Features (0Ah): Supported 00:12:57.075 Asynchronous Event Request (0Ch): Supported 00:12:57.075 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:57.075 Directive Send (19h): Supported 00:12:57.075 Directive Receive (1Ah): Supported 00:12:57.075 Virtualization Management (1Ch): Supported 00:12:57.075 Doorbell Buffer Config (7Ch): Supported 00:12:57.075 Format NVM (80h): Supported LBA-Change 00:12:57.075 I/O Commands 00:12:57.075 ------------ 00:12:57.075 Flush (00h): Supported LBA-Change 00:12:57.075 Write (01h): Supported LBA-Change 00:12:57.075 Read (02h): Supported 00:12:57.075 Compare (05h): Supported 00:12:57.075 Write Zeroes (08h): Supported LBA-Change 00:12:57.075 Dataset Management (09h): Supported LBA-Change 00:12:57.075 Unknown (0Ch): Supported 00:12:57.075 Unknown (12h): Supported 00:12:57.075 Copy (19h): 
Supported LBA-Change 00:12:57.075 Unknown (1Dh): Supported LBA-Change 00:12:57.075 00:12:57.075 Error Log 00:12:57.075 ========= 00:12:57.075 00:12:57.075 Arbitration 00:12:57.075 =========== 00:12:57.075 Arbitration Burst: no limit 00:12:57.075 00:12:57.075 Power Management 00:12:57.075 ================ 00:12:57.075 Number of Power States: 1 00:12:57.075 Current Power State: Power State #0 00:12:57.075 Power State #0: 00:12:57.075 Max Power: 25.00 W 00:12:57.075 Non-Operational State: Operational 00:12:57.075 Entry Latency: 16 microseconds 00:12:57.075 Exit Latency: 4 microseconds 00:12:57.075 Relative Read Throughput: 0 00:12:57.075 Relative Read Latency: 0 00:12:57.075 Relative Write Throughput: 0 00:12:57.075 Relative Write Latency: 0 00:12:57.075 Idle Power: Not Reported 00:12:57.075 Active Power: Not Reported 00:12:57.075 Non-Operational Permissive Mode: Not Supported 00:12:57.075 00:12:57.075 Health Information 00:12:57.075 ================== 00:12:57.075 Critical Warnings: 00:12:57.075 Available Spare Space: OK 00:12:57.075 Temperature: [2024-07-15 13:54:21.471384] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69890 terminated unexpected 00:12:57.075 OK 00:12:57.075 Device Reliability: OK 00:12:57.075 Read Only: No 00:12:57.075 Volatile Memory Backup: OK 00:12:57.075 Current Temperature: 323 Kelvin (50 Celsius) 00:12:57.076 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:57.076 Available Spare: 0% 00:12:57.076 Available Spare Threshold: 0% 00:12:57.076 Life Percentage Used: 0% 00:12:57.076 Data Units Read: 726 00:12:57.076 Data Units Written: 571 00:12:57.076 Host Read Commands: 33507 00:12:57.076 Host Write Commands: 31153 00:12:57.076 Controller Busy Time: 0 minutes 00:12:57.076 Power Cycles: 0 00:12:57.076 Power On Hours: 0 hours 00:12:57.076 Unsafe Shutdowns: 0 00:12:57.076 Unrecoverable Media Errors: 0 00:12:57.076 Lifetime Error Log Entries: 0 00:12:57.076 Warning Temperature Time: 0 minutes 00:12:57.076 Critical Temperature Time: 0 minutes 00:12:57.076 00:12:57.076 Number of Queues 00:12:57.076 ================ 00:12:57.076 Number of I/O Submission Queues: 64 00:12:57.076 Number of I/O Completion Queues: 64 00:12:57.076 00:12:57.076 ZNS Specific Controller Data 00:12:57.076 ============================ 00:12:57.076 Zone Append Size Limit: 0 00:12:57.076 00:12:57.076 00:12:57.076 Active Namespaces 00:12:57.076 ================= 00:12:57.076 Namespace ID:1 00:12:57.076 Error Recovery Timeout: Unlimited 00:12:57.076 Command Set Identifier: NVM (00h) 00:12:57.076 Deallocate: Supported 00:12:57.076 Deallocated/Unwritten Error: Supported 00:12:57.076 Deallocated Read Value: All 0x00 00:12:57.076 Deallocate in Write Zeroes: Not Supported 00:12:57.076 Deallocated Guard Field: 0xFFFF 00:12:57.076 Flush: Supported 00:12:57.076 Reservation: Not Supported 00:12:57.076 Namespace Sharing Capabilities: Private 00:12:57.076 Size (in LBAs): 1310720 (5GiB) 00:12:57.076 Capacity (in LBAs): 1310720 (5GiB) 00:12:57.076 Utilization (in LBAs): 1310720 (5GiB) 00:12:57.076 Thin Provisioning: Not Supported 00:12:57.076 Per-NS Atomic Units: No 00:12:57.076 Maximum Single Source Range Length: 128 00:12:57.076 Maximum Copy Length: 128 00:12:57.076 Maximum Source Range Count: 128 00:12:57.076 NGUID/EUI64 Never Reused: No 00:12:57.076 Namespace Write Protected: No 00:12:57.076 Number of LBA Formats: 8 00:12:57.076 Current LBA Format: LBA Format #04 00:12:57.076 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.076 LBA Format #01: Data Size: 512 Metadata 
Size: 8 00:12:57.076 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:57.076 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:57.076 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:57.076 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:57.076 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:57.076 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:57.076 00:12:57.076 NVM Specific Namespace Data 00:12:57.076 =========================== 00:12:57.076 Logical Block Storage Tag Mask: 0 00:12:57.076 Protection Information Capabilities: 00:12:57.076 16b Guard Protection Information Storage Tag Support: No 00:12:57.076 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:57.076 Storage Tag Check Read Support: No 00:12:57.076 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.076 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.076 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.076 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.076 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.076 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.076 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.076 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.076 ===================================================== 00:12:57.076 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:57.076 ===================================================== 00:12:57.076 Controller Capabilities/Features 00:12:57.076 ================================ 00:12:57.076 Vendor ID: 1b36 00:12:57.076 Subsystem Vendor ID: 1af4 00:12:57.076 Serial Number: 12343 00:12:57.076 Model Number: QEMU NVMe Ctrl 00:12:57.076 Firmware Version: 8.0.0 00:12:57.076 Recommended Arb Burst: 6 00:12:57.076 IEEE OUI Identifier: 00 54 52 00:12:57.076 Multi-path I/O 00:12:57.076 May have multiple subsystem ports: No 00:12:57.076 May have multiple controllers: Yes 00:12:57.076 Associated with SR-IOV VF: No 00:12:57.076 Max Data Transfer Size: 524288 00:12:57.076 Max Number of Namespaces: 256 00:12:57.076 Max Number of I/O Queues: 64 00:12:57.076 NVMe Specification Version (VS): 1.4 00:12:57.076 NVMe Specification Version (Identify): 1.4 00:12:57.076 Maximum Queue Entries: 2048 00:12:57.076 Contiguous Queues Required: Yes 00:12:57.076 Arbitration Mechanisms Supported 00:12:57.076 Weighted Round Robin: Not Supported 00:12:57.076 Vendor Specific: Not Supported 00:12:57.076 Reset Timeout: 7500 ms 00:12:57.076 Doorbell Stride: 4 bytes 00:12:57.076 NVM Subsystem Reset: Not Supported 00:12:57.076 Command Sets Supported 00:12:57.076 NVM Command Set: Supported 00:12:57.076 Boot Partition: Not Supported 00:12:57.076 Memory Page Size Minimum: 4096 bytes 00:12:57.076 Memory Page Size Maximum: 65536 bytes 00:12:57.076 Persistent Memory Region: Not Supported 00:12:57.076 Optional Asynchronous Events Supported 00:12:57.076 Namespace Attribute Notices: Supported 00:12:57.076 Firmware Activation Notices: Not Supported 00:12:57.076 ANA Change Notices: Not Supported 00:12:57.076 PLE Aggregate Log Change Notices: Not Supported 00:12:57.076 LBA Status Info Alert Notices: Not Supported 00:12:57.076 EGE Aggregate Log Change 
Notices: Not Supported 00:12:57.076 Normal NVM Subsystem Shutdown event: Not Supported 00:12:57.076 Zone Descriptor Change Notices: Not Supported 00:12:57.076 Discovery Log Change Notices: Not Supported 00:12:57.076 Controller Attributes 00:12:57.076 128-bit Host Identifier: Not Supported 00:12:57.076 Non-Operational Permissive Mode: Not Supported 00:12:57.076 NVM Sets: Not Supported 00:12:57.076 Read Recovery Levels: Not Supported 00:12:57.076 Endurance Groups: Supported 00:12:57.076 Predictable Latency Mode: Not Supported 00:12:57.076 Traffic Based Keep ALive: Not Supported 00:12:57.076 Namespace Granularity: Not Supported 00:12:57.076 SQ Associations: Not Supported 00:12:57.076 UUID List: Not Supported 00:12:57.077 Multi-Domain Subsystem: Not Supported 00:12:57.077 Fixed Capacity Management: Not Supported 00:12:57.077 Variable Capacity Management: Not Supported 00:12:57.077 Delete Endurance Group: Not Supported 00:12:57.077 Delete NVM Set: Not Supported 00:12:57.077 Extended LBA Formats Supported: Supported 00:12:57.077 Flexible Data Placement Supported: Supported 00:12:57.077 00:12:57.077 Controller Memory Buffer Support 00:12:57.077 ================================ 00:12:57.077 Supported: No 00:12:57.077 00:12:57.077 Persistent Memory Region Support 00:12:57.077 ================================ 00:12:57.077 Supported: No 00:12:57.077 00:12:57.077 Admin Command Set Attributes 00:12:57.077 ============================ 00:12:57.077 Security Send/Receive: Not Supported 00:12:57.077 Format NVM: Supported 00:12:57.077 Firmware Activate/Download: Not Supported 00:12:57.077 Namespace Management: Supported 00:12:57.077 Device Self-Test: Not Supported 00:12:57.077 Directives: Supported 00:12:57.077 NVMe-MI: Not Supported 00:12:57.077 Virtualization Management: Not Supported 00:12:57.077 Doorbell Buffer Config: Supported 00:12:57.077 Get LBA Status Capability: Not Supported 00:12:57.077 Command & Feature Lockdown Capability: Not Supported 00:12:57.077 Abort Command Limit: 4 00:12:57.077 Async Event Request Limit: 4 00:12:57.077 Number of Firmware Slots: N/A 00:12:57.077 Firmware Slot 1 Read-Only: N/A 00:12:57.077 Firmware Activation Without Reset: N/A 00:12:57.077 Multiple Update Detection Support: N/A 00:12:57.077 Firmware Update Granularity: No Information Provided 00:12:57.077 Per-Namespace SMART Log: Yes 00:12:57.077 Asymmetric Namespace Access Log Page: Not Supported 00:12:57.077 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:57.077 Command Effects Log Page: Supported 00:12:57.077 Get Log Page Extended Data: Supported 00:12:57.077 Telemetry Log Pages: Not Supported 00:12:57.077 Persistent Event Log Pages: Not Supported 00:12:57.077 Supported Log Pages Log Page: May Support 00:12:57.077 Commands Supported & Effects Log Page: Not Supported 00:12:57.077 Feature Identifiers & Effects Log Page:May Support 00:12:57.077 NVMe-MI Commands & Effects Log Page: May Support 00:12:57.077 Data Area 4 for Telemetry Log: Not Supported 00:12:57.077 Error Log Page Entries Supported: 1 00:12:57.077 Keep Alive: Not Supported 00:12:57.077 00:12:57.077 NVM Command Set Attributes 00:12:57.077 ========================== 00:12:57.077 Submission Queue Entry Size 00:12:57.077 Max: 64 00:12:57.077 Min: 64 00:12:57.077 Completion Queue Entry Size 00:12:57.077 Max: 16 00:12:57.077 Min: 16 00:12:57.077 Number of Namespaces: 256 00:12:57.077 Compare Command: Supported 00:12:57.077 Write Uncorrectable Command: Not Supported 00:12:57.077 Dataset Management Command: Supported 00:12:57.077 Write Zeroes Command: 
Supported 00:12:57.077 Set Features Save Field: Supported 00:12:57.077 Reservations: Not Supported 00:12:57.077 Timestamp: Supported 00:12:57.077 Copy: Supported 00:12:57.077 Volatile Write Cache: Present 00:12:57.077 Atomic Write Unit (Normal): 1 00:12:57.077 Atomic Write Unit (PFail): 1 00:12:57.077 Atomic Compare & Write Unit: 1 00:12:57.077 Fused Compare & Write: Not Supported 00:12:57.077 Scatter-Gather List 00:12:57.077 SGL Command Set: Supported 00:12:57.077 SGL Keyed: Not Supported 00:12:57.077 SGL Bit Bucket Descriptor: Not Supported 00:12:57.077 SGL Metadata Pointer: Not Supported 00:12:57.077 Oversized SGL: Not Supported 00:12:57.077 SGL Metadata Address: Not Supported 00:12:57.077 SGL Offset: Not Supported 00:12:57.077 Transport SGL Data Block: Not Supported 00:12:57.077 Replay Protected Memory Block: Not Supported 00:12:57.077 00:12:57.077 Firmware Slot Information 00:12:57.077 ========================= 00:12:57.077 Active slot: 1 00:12:57.077 Slot 1 Firmware Revision: 1.0 00:12:57.077 00:12:57.077 00:12:57.077 Commands Supported and Effects 00:12:57.077 ============================== 00:12:57.077 Admin Commands 00:12:57.077 -------------- 00:12:57.077 Delete I/O Submission Queue (00h): Supported 00:12:57.077 Create I/O Submission Queue (01h): Supported 00:12:57.077 Get Log Page (02h): Supported 00:12:57.077 Delete I/O Completion Queue (04h): Supported 00:12:57.077 Create I/O Completion Queue (05h): Supported 00:12:57.077 Identify (06h): Supported 00:12:57.077 Abort (08h): Supported 00:12:57.077 Set Features (09h): Supported 00:12:57.077 Get Features (0Ah): Supported 00:12:57.077 Asynchronous Event Request (0Ch): Supported 00:12:57.077 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:57.077 Directive Send (19h): Supported 00:12:57.077 Directive Receive (1Ah): Supported 00:12:57.077 Virtualization Management (1Ch): Supported 00:12:57.077 Doorbell Buffer Config (7Ch): Supported 00:12:57.077 Format NVM (80h): Supported LBA-Change 00:12:57.077 I/O Commands 00:12:57.077 ------------ 00:12:57.077 Flush (00h): Supported LBA-Change 00:12:57.077 Write (01h): Supported LBA-Change 00:12:57.077 Read (02h): Supported 00:12:57.077 Compare (05h): Supported 00:12:57.077 Write Zeroes (08h): Supported LBA-Change 00:12:57.077 Dataset Management (09h): Supported LBA-Change 00:12:57.077 Unknown (0Ch): Supported 00:12:57.077 Unknown (12h): Supported 00:12:57.077 Copy (19h): Supported LBA-Change 00:12:57.077 Unknown (1Dh): Supported LBA-Change 00:12:57.077 00:12:57.077 Error Log 00:12:57.077 ========= 00:12:57.077 00:12:57.077 Arbitration 00:12:57.077 =========== 00:12:57.077 Arbitration Burst: no limit 00:12:57.077 00:12:57.077 Power Management 00:12:57.077 ================ 00:12:57.077 Number of Power States: 1 00:12:57.077 Current Power State: Power State #0 00:12:57.077 Power State #0: 00:12:57.077 Max Power: 25.00 W 00:12:57.077 Non-Operational State: Operational 00:12:57.077 Entry Latency: 16 microseconds 00:12:57.077 Exit Latency: 4 microseconds 00:12:57.077 Relative Read Throughput: 0 00:12:57.077 Relative Read Latency: 0 00:12:57.077 Relative Write Throughput: 0 00:12:57.077 Relative Write Latency: 0 00:12:57.077 Idle Power: Not Reported 00:12:57.077 Active Power: Not Reported 00:12:57.077 Non-Operational Permissive Mode: Not Supported 00:12:57.077 00:12:57.077 Health Information 00:12:57.077 ================== 00:12:57.077 Critical Warnings: 00:12:57.077 Available Spare Space: OK 00:12:57.077 Temperature: OK 00:12:57.077 Device Reliability: OK 00:12:57.077 Read Only: No 
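The four controller dumps in this run are near-duplicates; what actually distinguishes them is a handful of lines (serial number, multiple-controller support, Endurance Groups, Flexible Data Placement, namespace geometry). One way to surface just those differences is to rerun identify per controller, reusing the -r 'trtype:PCIe traddr:...' selector the harness itself uses later in this trace, and filter:

for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    echo "== $bdf =="
    build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
        grep -E 'Serial Number|Endurance Groups|Flexible Data Placement|Size \(in LBAs\)'
done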
00:12:57.077 Volatile Memory Backup: OK 00:12:57.077 Current Temperature: 323 Kelvin (50 Celsius) 00:12:57.077 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:57.077 Available Spare: 0% 00:12:57.077 Available Spare Threshold: 0% 00:12:57.077 Life Percentage Used: 0% 00:12:57.077 Data Units Read: 765 00:12:57.077 Data Units Written: 658 00:12:57.077 Host Read Commands: 33308 00:12:57.077 Host Write Commands: 31898 00:12:57.077 Controller Busy Time: 0 minutes 00:12:57.077 Power Cycles: 0 00:12:57.077 Power On Hours: 0 hours 00:12:57.077 Unsafe Shutdowns: 0 00:12:57.077 Unrecoverable Media Errors: 0 00:12:57.077 Lifetime Error Log Entries: 0 00:12:57.077 Warning Temperature Time: 0 minutes 00:12:57.077 Critical Temperature Time: 0 minutes 00:12:57.077 00:12:57.077 Number of Queues 00:12:57.077 ================ 00:12:57.077 Number of I/O Submission Queues: 64 00:12:57.077 Number of I/O Completion Queues: 64 00:12:57.077 00:12:57.077 ZNS Specific Controller Data 00:12:57.077 ============================ 00:12:57.077 Zone Append Size Limit: 0 00:12:57.077 00:12:57.077 00:12:57.077 Active Namespaces 00:12:57.077 ================= 00:12:57.077 Namespace ID:1 00:12:57.077 Error Recovery Timeout: Unlimited 00:12:57.077 Command Set Identifier: NVM (00h) 00:12:57.077 Deallocate: Supported 00:12:57.077 Deallocated/Unwritten Error: Supported 00:12:57.077 Deallocated Read Value: All 0x00 00:12:57.077 Deallocate in Write Zeroes: Not Supported 00:12:57.077 Deallocated Guard Field: 0xFFFF 00:12:57.077 Flush: Supported 00:12:57.077 Reservation: Not Supported 00:12:57.077 Namespace Sharing Capabilities: Multiple Controllers 00:12:57.077 Size (in LBAs): 262144 (1GiB) 00:12:57.077 Capacity (in LBAs): 262144 (1GiB) 00:12:57.077 Utilization (in LBAs): 262144 (1GiB) 00:12:57.077 Thin Provisioning: Not Supported 00:12:57.077 Per-NS Atomic Units: No 00:12:57.077 Maximum Single Source Range Length: 128 00:12:57.077 Maximum Copy Length: 128 00:12:57.077 Maximum Source Range Count: 128 00:12:57.077 NGUID/EUI64 Never Reused: No 00:12:57.077 Namespace Write Protected: No 00:12:57.077 Endurance group ID: 1 00:12:57.077 Number of LBA Formats: 8 00:12:57.077 Current LBA Format: LBA Format #04 00:12:57.077 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.077 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:57.077 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:57.077 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:57.077 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:57.077 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:57.077 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:57.077 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:57.077 00:12:57.077 Get Feature FDP: 00:12:57.077 ================ 00:12:57.078 Enabled: Yes 00:12:57.078 FDP configuration index: 0 00:12:57.078 00:12:57.078 FDP configurations log page 00:12:57.078 =========================== 00:12:57.078 Number of FDP configurations: 1 00:12:57.078 Version: 0 00:12:57.078 Size: 112 00:12:57.078 FDP Configuration Descriptor: 0 00:12:57.078 Descriptor Size: 96 00:12:57.078 Reclaim Group Identifier format: 2 00:12:57.078 FDP Volatile Write Cache: Not Present 00:12:57.078 FDP Configuration: Valid 00:12:57.078 Vendor Specific Size: 0 00:12:57.078 Number of Reclaim Groups: 2 00:12:57.078 Number of Reclaim Unit Handles: 8 00:12:57.078 Max Placement Identifiers: 128 00:12:57.078 Number of Namespaces Supported: 256 00:12:57.078 Reclaim Unit Nominal Size: 6000000 bytes 00:12:57.078 Estimated Reclaim Unit Time
Limit: Not Reported 00:12:57.078 RUH Desc #000: RUH Type: Initially Isolated 00:12:57.078 RUH Desc #001: RUH Type: Initially Isolated 00:12:57.078 RUH Desc #002: RUH Type: Initially Isolated 00:12:57.078 RUH Desc #003: RUH Type: Initially Isolated 00:12:57.078 RUH Desc #004: RUH Type: Initially Isolated 00:12:57.078 RUH Desc #005: RUH Type: Initially Isolated 00:12:57.078 RUH Desc #006: RUH Type: Initially Isolated 00:12:57.078 RUH Desc #007: RUH Type: Initially Isolated 00:12:57.078 00:12:57.078 FDP reclaim unit handle usage log page 00:12:57.078 ====================================== 00:12:57.078 Number of Reclaim Unit Handles: 8 00:12:57.078 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:57.078 RUH Usage Desc #001: RUH Attributes: Unused 00:12:57.078 RUH Usage Desc #002: RUH Attributes: Unused 00:12:57.078 RUH Usage Desc #003: RUH Attributes: Unused 00:12:57.078 RUH Usage Desc #004: RUH Attributes: Unused 00:12:57.078 RUH Usage Desc #005: RUH Attributes: Unused 00:12:57.078 RUH Usage Desc #006: RUH Attributes: Unused 00:12:57.078 RUH Usage Desc #007: RUH Attributes: Unused 00:12:57.078 00:12:57.078 FDP statistics log page 00:12:57.078 ======================= 00:12:57.078 Host bytes with metadata written: 406298624 00:12:57.078 Media[2024-07-15 13:54:21.473206] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69890 terminated unexpected 00:12:57.078 bytes with metadata written: 406343680 00:12:57.078 Media bytes erased: 0 00:12:57.078 00:12:57.078 FDP events log page 00:12:57.078 =================== 00:12:57.078 Number of FDP events: 0 00:12:57.078 00:12:57.078 NVM Specific Namespace Data 00:12:57.078 =========================== 00:12:57.078 Logical Block Storage Tag Mask: 0 00:12:57.078 Protection Information Capabilities: 00:12:57.078 16b Guard Protection Information Storage Tag Support: No 00:12:57.078 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:57.078 Storage Tag Check Read Support: No 00:12:57.078 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.078 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.078 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.078 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.078 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.078 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.078 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.078 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.078 ===================================================== 00:12:57.078 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:57.078 ===================================================== 00:12:57.078 Controller Capabilities/Features 00:12:57.078 ================================ 00:12:57.078 Vendor ID: 1b36 00:12:57.078 Subsystem Vendor ID: 1af4 00:12:57.078 Serial Number: 12342 00:12:57.078 Model Number: QEMU NVMe Ctrl 00:12:57.078 Firmware Version: 8.0.0 00:12:57.078 Recommended Arb Burst: 6 00:12:57.078 IEEE OUI Identifier: 00 54 52 00:12:57.078 Multi-path I/O 00:12:57.078 May have multiple subsystem ports: No 00:12:57.078 May have multiple controllers: No 00:12:57.078 Associated with SR-IOV 
VF: No 00:12:57.078 Max Data Transfer Size: 524288 00:12:57.078 Max Number of Namespaces: 256 00:12:57.078 Max Number of I/O Queues: 64 00:12:57.078 NVMe Specification Version (VS): 1.4 00:12:57.078 NVMe Specification Version (Identify): 1.4 00:12:57.078 Maximum Queue Entries: 2048 00:12:57.078 Contiguous Queues Required: Yes 00:12:57.078 Arbitration Mechanisms Supported 00:12:57.078 Weighted Round Robin: Not Supported 00:12:57.078 Vendor Specific: Not Supported 00:12:57.078 Reset Timeout: 7500 ms 00:12:57.078 Doorbell Stride: 4 bytes 00:12:57.078 NVM Subsystem Reset: Not Supported 00:12:57.078 Command Sets Supported 00:12:57.078 NVM Command Set: Supported 00:12:57.078 Boot Partition: Not Supported 00:12:57.078 Memory Page Size Minimum: 4096 bytes 00:12:57.078 Memory Page Size Maximum: 65536 bytes 00:12:57.078 Persistent Memory Region: Not Supported 00:12:57.078 Optional Asynchronous Events Supported 00:12:57.078 Namespace Attribute Notices: Supported 00:12:57.078 Firmware Activation Notices: Not Supported 00:12:57.078 ANA Change Notices: Not Supported 00:12:57.078 PLE Aggregate Log Change Notices: Not Supported 00:12:57.078 LBA Status Info Alert Notices: Not Supported 00:12:57.078 EGE Aggregate Log Change Notices: Not Supported 00:12:57.078 Normal NVM Subsystem Shutdown event: Not Supported 00:12:57.078 Zone Descriptor Change Notices: Not Supported 00:12:57.078 Discovery Log Change Notices: Not Supported 00:12:57.078 Controller Attributes 00:12:57.079 128-bit Host Identifier: Not Supported 00:12:57.079 Non-Operational Permissive Mode: Not Supported 00:12:57.079 NVM Sets: Not Supported 00:12:57.079 Read Recovery Levels: Not Supported 00:12:57.079 Endurance Groups: Not Supported 00:12:57.079 Predictable Latency Mode: Not Supported 00:12:57.079 Traffic Based Keep ALive: Not Supported 00:12:57.079 Namespace Granularity: Not Supported 00:12:57.079 SQ Associations: Not Supported 00:12:57.079 UUID List: Not Supported 00:12:57.079 Multi-Domain Subsystem: Not Supported 00:12:57.079 Fixed Capacity Management: Not Supported 00:12:57.079 Variable Capacity Management: Not Supported 00:12:57.079 Delete Endurance Group: Not Supported 00:12:57.079 Delete NVM Set: Not Supported 00:12:57.079 Extended LBA Formats Supported: Supported 00:12:57.079 Flexible Data Placement Supported: Not Supported 00:12:57.079 00:12:57.079 Controller Memory Buffer Support 00:12:57.079 ================================ 00:12:57.079 Supported: No 00:12:57.079 00:12:57.079 Persistent Memory Region Support 00:12:57.079 ================================ 00:12:57.079 Supported: No 00:12:57.079 00:12:57.079 Admin Command Set Attributes 00:12:57.079 ============================ 00:12:57.079 Security Send/Receive: Not Supported 00:12:57.079 Format NVM: Supported 00:12:57.079 Firmware Activate/Download: Not Supported 00:12:57.079 Namespace Management: Supported 00:12:57.079 Device Self-Test: Not Supported 00:12:57.079 Directives: Supported 00:12:57.079 NVMe-MI: Not Supported 00:12:57.079 Virtualization Management: Not Supported 00:12:57.079 Doorbell Buffer Config: Supported 00:12:57.079 Get LBA Status Capability: Not Supported 00:12:57.079 Command & Feature Lockdown Capability: Not Supported 00:12:57.079 Abort Command Limit: 4 00:12:57.079 Async Event Request Limit: 4 00:12:57.079 Number of Firmware Slots: N/A 00:12:57.079 Firmware Slot 1 Read-Only: N/A 00:12:57.079 Firmware Activation Without Reset: N/A 00:12:57.079 Multiple Update Detection Support: N/A 00:12:57.079 Firmware Update Granularity: No Information Provided 00:12:57.079 
Per-Namespace SMART Log: Yes 00:12:57.079 Asymmetric Namespace Access Log Page: Not Supported 00:12:57.079 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:57.079 Command Effects Log Page: Supported 00:12:57.079 Get Log Page Extended Data: Supported 00:12:57.079 Telemetry Log Pages: Not Supported 00:12:57.079 Persistent Event Log Pages: Not Supported 00:12:57.079 Supported Log Pages Log Page: May Support 00:12:57.079 Commands Supported & Effects Log Page: Not Supported 00:12:57.079 Feature Identifiers & Effects Log Page:May Support 00:12:57.079 NVMe-MI Commands & Effects Log Page: May Support 00:12:57.079 Data Area 4 for Telemetry Log: Not Supported 00:12:57.079 Error Log Page Entries Supported: 1 00:12:57.079 Keep Alive: Not Supported 00:12:57.079 00:12:57.079 NVM Command Set Attributes 00:12:57.079 ========================== 00:12:57.079 Submission Queue Entry Size 00:12:57.079 Max: 64 00:12:57.079 Min: 64 00:12:57.079 Completion Queue Entry Size 00:12:57.079 Max: 16 00:12:57.079 Min: 16 00:12:57.079 Number of Namespaces: 256 00:12:57.079 Compare Command: Supported 00:12:57.079 Write Uncorrectable Command: Not Supported 00:12:57.079 Dataset Management Command: Supported 00:12:57.079 Write Zeroes Command: Supported 00:12:57.079 Set Features Save Field: Supported 00:12:57.079 Reservations: Not Supported 00:12:57.079 Timestamp: Supported 00:12:57.079 Copy: Supported 00:12:57.079 Volatile Write Cache: Present 00:12:57.079 Atomic Write Unit (Normal): 1 00:12:57.079 Atomic Write Unit (PFail): 1 00:12:57.079 Atomic Compare & Write Unit: 1 00:12:57.079 Fused Compare & Write: Not Supported 00:12:57.079 Scatter-Gather List 00:12:57.079 SGL Command Set: Supported 00:12:57.079 SGL Keyed: Not Supported 00:12:57.079 SGL Bit Bucket Descriptor: Not Supported 00:12:57.079 SGL Metadata Pointer: Not Supported 00:12:57.079 Oversized SGL: Not Supported 00:12:57.079 SGL Metadata Address: Not Supported 00:12:57.079 SGL Offset: Not Supported 00:12:57.079 Transport SGL Data Block: Not Supported 00:12:57.079 Replay Protected Memory Block: Not Supported 00:12:57.079 00:12:57.079 Firmware Slot Information 00:12:57.079 ========================= 00:12:57.079 Active slot: 1 00:12:57.079 Slot 1 Firmware Revision: 1.0 00:12:57.079 00:12:57.079 00:12:57.079 Commands Supported and Effects 00:12:57.079 ============================== 00:12:57.079 Admin Commands 00:12:57.079 -------------- 00:12:57.079 Delete I/O Submission Queue (00h): Supported 00:12:57.079 Create I/O Submission Queue (01h): Supported 00:12:57.079 Get Log Page (02h): Supported 00:12:57.079 Delete I/O Completion Queue (04h): Supported 00:12:57.079 Create I/O Completion Queue (05h): Supported 00:12:57.079 Identify (06h): Supported 00:12:57.079 Abort (08h): Supported 00:12:57.079 Set Features (09h): Supported 00:12:57.079 Get Features (0Ah): Supported 00:12:57.079 Asynchronous Event Request (0Ch): Supported 00:12:57.079 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:57.079 Directive Send (19h): Supported 00:12:57.079 Directive Receive (1Ah): Supported 00:12:57.079 Virtualization Management (1Ch): Supported 00:12:57.079 Doorbell Buffer Config (7Ch): Supported 00:12:57.079 Format NVM (80h): Supported LBA-Change 00:12:57.079 I/O Commands 00:12:57.079 ------------ 00:12:57.079 Flush (00h): Supported LBA-Change 00:12:57.079 Write (01h): Supported LBA-Change 00:12:57.079 Read (02h): Supported 00:12:57.079 Compare (05h): Supported 00:12:57.079 Write Zeroes (08h): Supported LBA-Change 00:12:57.079 Dataset Management (09h): Supported LBA-Change 
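Back in the 12343 dump, the FDP statistics log page reported Host bytes with metadata written: 406298624 against Media bytes with metadata written: 406343680. The ratio of the two counters is a crude device-level write-amplification indicator; for these values it is roughly 1.000111, i.e. the emulated device has incurred essentially no extra media writes. A one-liner to compute it, with the counter values copied from the log above:

awk 'BEGIN { printf "%.6f\n", 406343680 / 406298624 }'   # ~1.000111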
00:12:57.079 Unknown (0Ch): Supported 00:12:57.079 Unknown (12h): Supported 00:12:57.079 Copy (19h): Supported LBA-Change 00:12:57.079 Unknown (1Dh): Supported LBA-Change 00:12:57.079 00:12:57.079 Error Log 00:12:57.079 ========= 00:12:57.079 00:12:57.079 Arbitration 00:12:57.079 =========== 00:12:57.079 Arbitration Burst: no limit 00:12:57.079 00:12:57.079 Power Management 00:12:57.079 ================ 00:12:57.079 Number of Power States: 1 00:12:57.079 Current Power State: Power State #0 00:12:57.079 Power State #0: 00:12:57.079 Max Power: 25.00 W 00:12:57.079 Non-Operational State: Operational 00:12:57.079 Entry Latency: 16 microseconds 00:12:57.079 Exit Latency: 4 microseconds 00:12:57.079 Relative Read Throughput: 0 00:12:57.079 Relative Read Latency: 0 00:12:57.079 Relative Write Throughput: 0 00:12:57.079 Relative Write Latency: 0 00:12:57.079 Idle Power: Not Reported 00:12:57.079 Active Power: Not Reported 00:12:57.079 Non-Operational Permissive Mode: Not Supported 00:12:57.079 00:12:57.079 Health Information 00:12:57.079 ================== 00:12:57.079 Critical Warnings: 00:12:57.079 Available Spare Space: OK 00:12:57.079 Temperature: OK 00:12:57.079 Device Reliability: OK 00:12:57.079 Read Only: No 00:12:57.079 Volatile Memory Backup: OK 00:12:57.079 Current Temperature: 323 Kelvin (50 Celsius) 00:12:57.079 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:57.079 Available Spare: 0% 00:12:57.079 Available Spare Threshold: 0% 00:12:57.079 Life Percentage Used: 0% 00:12:57.079 Data Units Read: 2079 00:12:57.079 Data Units Written: 1759 00:12:57.079 Host Read Commands: 98246 00:12:57.079 Host Write Commands: 94016 00:12:57.079 Controller Busy Time: 0 minutes 00:12:57.079 Power Cycles: 0 00:12:57.079 Power On Hours: 0 hours 00:12:57.079 Unsafe Shutdowns: 0 00:12:57.079 Unrecoverable Media Errors: 0 00:12:57.079 Lifetime Error Log Entries: 0 00:12:57.079 Warning Temperature Time: 0 minutes 00:12:57.079 Critical Temperature Time: 0 minutes 00:12:57.079 00:12:57.079 Number of Queues 00:12:57.079 ================ 00:12:57.079 Number of I/O Submission Queues: 64 00:12:57.079 Number of I/O Completion Queues: 64 00:12:57.079 00:12:57.079 ZNS Specific Controller Data 00:12:57.079 ============================ 00:12:57.079 Zone Append Size Limit: 0 00:12:57.079 00:12:57.079 00:12:57.079 Active Namespaces 00:12:57.079 ================= 00:12:57.079 Namespace ID:1 00:12:57.079 Error Recovery Timeout: Unlimited 00:12:57.079 Command Set Identifier: NVM (00h) 00:12:57.079 Deallocate: Supported 00:12:57.079 Deallocated/Unwritten Error: Supported 00:12:57.079 Deallocated Read Value: All 0x00 00:12:57.079 Deallocate in Write Zeroes: Not Supported 00:12:57.079 Deallocated Guard Field: 0xFFFF 00:12:57.079 Flush: Supported 00:12:57.079 Reservation: Not Supported 00:12:57.079 Namespace Sharing Capabilities: Private 00:12:57.079 Size (in LBAs): 1048576 (4GiB) 00:12:57.079 Capacity (in LBAs): 1048576 (4GiB) 00:12:57.079 Utilization (in LBAs): 1048576 (4GiB) 00:12:57.079 Thin Provisioning: Not Supported 00:12:57.079 Per-NS Atomic Units: No 00:12:57.079 Maximum Single Source Range Length: 128 00:12:57.079 Maximum Copy Length: 128 00:12:57.079 Maximum Source Range Count: 128 00:12:57.079 NGUID/EUI64 Never Reused: No 00:12:57.079 Namespace Write Protected: No 00:12:57.079 Number of LBA Formats: 8 00:12:57.079 Current LBA Format: LBA Format #04 00:12:57.079 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.079 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:57.080 LBA Format #02: Data Size: 512 
Metadata Size: 16 00:12:57.080 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:57.080 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:57.080 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:57.080 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:57.080 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:57.080 00:12:57.080 NVM Specific Namespace Data 00:12:57.080 =========================== 00:12:57.080 Logical Block Storage Tag Mask: 0 00:12:57.080 Protection Information Capabilities: 00:12:57.080 16b Guard Protection Information Storage Tag Support: No 00:12:57.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:57.080 Storage Tag Check Read Support: No 00:12:57.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Namespace ID:2 00:12:57.080 Error Recovery Timeout: Unlimited 00:12:57.080 Command Set Identifier: NVM (00h) 00:12:57.080 Deallocate: Supported 00:12:57.080 Deallocated/Unwritten Error: Supported 00:12:57.080 Deallocated Read Value: All 0x00 00:12:57.080 Deallocate in Write Zeroes: Not Supported 00:12:57.080 Deallocated Guard Field: 0xFFFF 00:12:57.080 Flush: Supported 00:12:57.080 Reservation: Not Supported 00:12:57.080 Namespace Sharing Capabilities: Private 00:12:57.080 Size (in LBAs): 1048576 (4GiB) 00:12:57.080 Capacity (in LBAs): 1048576 (4GiB) 00:12:57.080 Utilization (in LBAs): 1048576 (4GiB) 00:12:57.080 Thin Provisioning: Not Supported 00:12:57.080 Per-NS Atomic Units: No 00:12:57.080 Maximum Single Source Range Length: 128 00:12:57.080 Maximum Copy Length: 128 00:12:57.080 Maximum Source Range Count: 128 00:12:57.080 NGUID/EUI64 Never Reused: No 00:12:57.080 Namespace Write Protected: No 00:12:57.080 Number of LBA Formats: 8 00:12:57.080 Current LBA Format: LBA Format #04 00:12:57.080 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.080 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:57.080 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:57.080 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:57.080 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:57.080 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:57.080 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:57.080 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:57.080 00:12:57.080 NVM Specific Namespace Data 00:12:57.080 =========================== 00:12:57.080 Logical Block Storage Tag Mask: 0 00:12:57.080 Protection Information Capabilities: 00:12:57.080 16b Guard Protection Information Storage Tag Support: No 00:12:57.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:57.080 Storage Tag Check Read Support: No 00:12:57.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information 
Format: 16b Guard PI 00:12:57.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Namespace ID:3 00:12:57.080 Error Recovery Timeout: Unlimited 00:12:57.080 Command Set Identifier: NVM (00h) 00:12:57.080 Deallocate: Supported 00:12:57.080 Deallocated/Unwritten Error: Supported 00:12:57.080 Deallocated Read Value: All 0x00 00:12:57.080 Deallocate in Write Zeroes: Not Supported 00:12:57.080 Deallocated Guard Field: 0xFFFF 00:12:57.080 Flush: Supported 00:12:57.080 Reservation: Not Supported 00:12:57.080 Namespace Sharing Capabilities: Private 00:12:57.080 Size (in LBAs): 1048576 (4GiB) 00:12:57.080 Capacity (in LBAs): 1048576 (4GiB) 00:12:57.080 Utilization (in LBAs): 1048576 (4GiB) 00:12:57.080 Thin Provisioning: Not Supported 00:12:57.080 Per-NS Atomic Units: No 00:12:57.080 Maximum Single Source Range Length: 128 00:12:57.080 Maximum Copy Length: 128 00:12:57.080 Maximum Source Range Count: 128 00:12:57.080 NGUID/EUI64 Never Reused: No 00:12:57.080 Namespace Write Protected: No 00:12:57.080 Number of LBA Formats: 8 00:12:57.080 Current LBA Format: LBA Format #04 00:12:57.080 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.080 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:57.080 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:57.080 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:57.080 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:57.080 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:57.080 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:57.080 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:57.080 00:12:57.080 NVM Specific Namespace Data 00:12:57.080 =========================== 00:12:57.080 Logical Block Storage Tag Mask: 0 00:12:57.080 Protection Information Capabilities: 00:12:57.080 16b Guard Protection Information Storage Tag Support: No 00:12:57.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:57.080 Storage Tag Check Read Support: No 00:12:57.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.080 13:54:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # 
for bdf in "${bdfs[@]}" 00:12:57.080 13:54:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:12:57.339 ===================================================== 00:12:57.339 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:57.339 ===================================================== 00:12:57.339 Controller Capabilities/Features 00:12:57.339 ================================ 00:12:57.339 Vendor ID: 1b36 00:12:57.339 Subsystem Vendor ID: 1af4 00:12:57.339 Serial Number: 12340 00:12:57.339 Model Number: QEMU NVMe Ctrl 00:12:57.339 Firmware Version: 8.0.0 00:12:57.339 Recommended Arb Burst: 6 00:12:57.339 IEEE OUI Identifier: 00 54 52 00:12:57.339 Multi-path I/O 00:12:57.339 May have multiple subsystem ports: No 00:12:57.339 May have multiple controllers: No 00:12:57.339 Associated with SR-IOV VF: No 00:12:57.339 Max Data Transfer Size: 524288 00:12:57.339 Max Number of Namespaces: 256 00:12:57.339 Max Number of I/O Queues: 64 00:12:57.339 NVMe Specification Version (VS): 1.4 00:12:57.339 NVMe Specification Version (Identify): 1.4 00:12:57.339 Maximum Queue Entries: 2048 00:12:57.339 Contiguous Queues Required: Yes 00:12:57.339 Arbitration Mechanisms Supported 00:12:57.339 Weighted Round Robin: Not Supported 00:12:57.339 Vendor Specific: Not Supported 00:12:57.339 Reset Timeout: 7500 ms 00:12:57.339 Doorbell Stride: 4 bytes 00:12:57.339 NVM Subsystem Reset: Not Supported 00:12:57.339 Command Sets Supported 00:12:57.339 NVM Command Set: Supported 00:12:57.339 Boot Partition: Not Supported 00:12:57.339 Memory Page Size Minimum: 4096 bytes 00:12:57.339 Memory Page Size Maximum: 65536 bytes 00:12:57.339 Persistent Memory Region: Not Supported 00:12:57.339 Optional Asynchronous Events Supported 00:12:57.339 Namespace Attribute Notices: Supported 00:12:57.339 Firmware Activation Notices: Not Supported 00:12:57.339 ANA Change Notices: Not Supported 00:12:57.339 PLE Aggregate Log Change Notices: Not Supported 00:12:57.339 LBA Status Info Alert Notices: Not Supported 00:12:57.339 EGE Aggregate Log Change Notices: Not Supported 00:12:57.339 Normal NVM Subsystem Shutdown event: Not Supported 00:12:57.339 Zone Descriptor Change Notices: Not Supported 00:12:57.339 Discovery Log Change Notices: Not Supported 00:12:57.339 Controller Attributes 00:12:57.339 128-bit Host Identifier: Not Supported 00:12:57.339 Non-Operational Permissive Mode: Not Supported 00:12:57.339 NVM Sets: Not Supported 00:12:57.339 Read Recovery Levels: Not Supported 00:12:57.339 Endurance Groups: Not Supported 00:12:57.339 Predictable Latency Mode: Not Supported 00:12:57.339 Traffic Based Keep Alive: Not Supported 00:12:57.339 Namespace Granularity: Not Supported 00:12:57.339 SQ Associations: Not Supported 00:12:57.339 UUID List: Not Supported 00:12:57.339 Multi-Domain Subsystem: Not Supported 00:12:57.339 Fixed Capacity Management: Not Supported 00:12:57.339 Variable Capacity Management: Not Supported 00:12:57.339 Delete Endurance Group: Not Supported 00:12:57.339 Delete NVM Set: Not Supported 00:12:57.339 Extended LBA Formats Supported: Supported 00:12:57.339 Flexible Data Placement Supported: Not Supported 00:12:57.339 00:12:57.339 Controller Memory Buffer Support 00:12:57.339 ================================ 00:12:57.339 Supported: No 00:12:57.339 00:12:57.339 Persistent Memory Region Support 00:12:57.339 ================================ 00:12:57.339 Supported: No 00:12:57.339 00:12:57.339 Admin Command Set Attributes 00:12:57.339 
============================ 00:12:57.339 Security Send/Receive: Not Supported 00:12:57.339 Format NVM: Supported 00:12:57.339 Firmware Activate/Download: Not Supported 00:12:57.339 Namespace Management: Supported 00:12:57.339 Device Self-Test: Not Supported 00:12:57.339 Directives: Supported 00:12:57.339 NVMe-MI: Not Supported 00:12:57.339 Virtualization Management: Not Supported 00:12:57.339 Doorbell Buffer Config: Supported 00:12:57.339 Get LBA Status Capability: Not Supported 00:12:57.339 Command & Feature Lockdown Capability: Not Supported 00:12:57.339 Abort Command Limit: 4 00:12:57.339 Async Event Request Limit: 4 00:12:57.339 Number of Firmware Slots: N/A 00:12:57.339 Firmware Slot 1 Read-Only: N/A 00:12:57.339 Firmware Activation Without Reset: N/A 00:12:57.339 Multiple Update Detection Support: N/A 00:12:57.339 Firmware Update Granularity: No Information Provided 00:12:57.339 Per-Namespace SMART Log: Yes 00:12:57.339 Asymmetric Namespace Access Log Page: Not Supported 00:12:57.339 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:57.339 Command Effects Log Page: Supported 00:12:57.339 Get Log Page Extended Data: Supported 00:12:57.339 Telemetry Log Pages: Not Supported 00:12:57.339 Persistent Event Log Pages: Not Supported 00:12:57.339 Supported Log Pages Log Page: May Support 00:12:57.339 Commands Supported & Effects Log Page: Not Supported 00:12:57.339 Feature Identifiers & Effects Log Page: May Support 00:12:57.339 NVMe-MI Commands & Effects Log Page: May Support 00:12:57.339 Data Area 4 for Telemetry Log: Not Supported 00:12:57.340 Error Log Page Entries Supported: 1 00:12:57.340 Keep Alive: Not Supported 00:12:57.340 00:12:57.340 NVM Command Set Attributes 00:12:57.340 ========================== 00:12:57.340 Submission Queue Entry Size 00:12:57.340 Max: 64 00:12:57.340 Min: 64 00:12:57.340 Completion Queue Entry Size 00:12:57.340 Max: 16 00:12:57.340 Min: 16 00:12:57.340 Number of Namespaces: 256 00:12:57.340 Compare Command: Supported 00:12:57.340 Write Uncorrectable Command: Not Supported 00:12:57.340 Dataset Management Command: Supported 00:12:57.340 Write Zeroes Command: Supported 00:12:57.340 Set Features Save Field: Supported 00:12:57.340 Reservations: Not Supported 00:12:57.340 Timestamp: Supported 00:12:57.340 Copy: Supported 00:12:57.340 Volatile Write Cache: Present 00:12:57.340 Atomic Write Unit (Normal): 1 00:12:57.340 Atomic Write Unit (PFail): 1 00:12:57.340 Atomic Compare & Write Unit: 1 00:12:57.340 Fused Compare & Write: Not Supported 00:12:57.340 Scatter-Gather List 00:12:57.340 SGL Command Set: Supported 00:12:57.340 SGL Keyed: Not Supported 00:12:57.340 SGL Bit Bucket Descriptor: Not Supported 00:12:57.340 SGL Metadata Pointer: Not Supported 00:12:57.340 Oversized SGL: Not Supported 00:12:57.340 SGL Metadata Address: Not Supported 00:12:57.340 SGL Offset: Not Supported 00:12:57.340 Transport SGL Data Block: Not Supported 00:12:57.340 Replay Protected Memory Block: Not Supported 00:12:57.340 00:12:57.340 Firmware Slot Information 00:12:57.340 ========================= 00:12:57.340 Active slot: 1 00:12:57.340 Slot 1 Firmware Revision: 1.0 00:12:57.340 00:12:57.340 00:12:57.340 Commands Supported and Effects 00:12:57.340 ============================== 00:12:57.340 Admin Commands 00:12:57.340 -------------- 00:12:57.340 Delete I/O Submission Queue (00h): Supported 00:12:57.340 Create I/O Submission Queue (01h): Supported 00:12:57.340 Get Log Page (02h): Supported 00:12:57.340 Delete I/O Completion Queue (04h): Supported 00:12:57.340 Create I/O Completion Queue 
(05h): Supported 00:12:57.340 Identify (06h): Supported 00:12:57.340 Abort (08h): Supported 00:12:57.340 Set Features (09h): Supported 00:12:57.340 Get Features (0Ah): Supported 00:12:57.340 Asynchronous Event Request (0Ch): Supported 00:12:57.340 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:57.340 Directive Send (19h): Supported 00:12:57.340 Directive Receive (1Ah): Supported 00:12:57.340 Virtualization Management (1Ch): Supported 00:12:57.340 Doorbell Buffer Config (7Ch): Supported 00:12:57.340 Format NVM (80h): Supported LBA-Change 00:12:57.340 I/O Commands 00:12:57.340 ------------ 00:12:57.340 Flush (00h): Supported LBA-Change 00:12:57.340 Write (01h): Supported LBA-Change 00:12:57.340 Read (02h): Supported 00:12:57.340 Compare (05h): Supported 00:12:57.340 Write Zeroes (08h): Supported LBA-Change 00:12:57.340 Dataset Management (09h): Supported LBA-Change 00:12:57.340 Unknown (0Ch): Supported 00:12:57.340 Unknown (12h): Supported 00:12:57.340 Copy (19h): Supported LBA-Change 00:12:57.340 Unknown (1Dh): Supported LBA-Change 00:12:57.340 00:12:57.340 Error Log 00:12:57.340 ========= 00:12:57.340 00:12:57.340 Arbitration 00:12:57.340 =========== 00:12:57.340 Arbitration Burst: no limit 00:12:57.340 00:12:57.340 Power Management 00:12:57.340 ================ 00:12:57.340 Number of Power States: 1 00:12:57.340 Current Power State: Power State #0 00:12:57.340 Power State #0: 00:12:57.340 Max Power: 25.00 W 00:12:57.340 Non-Operational State: Operational 00:12:57.340 Entry Latency: 16 microseconds 00:12:57.340 Exit Latency: 4 microseconds 00:12:57.340 Relative Read Throughput: 0 00:12:57.340 Relative Read Latency: 0 00:12:57.340 Relative Write Throughput: 0 00:12:57.340 Relative Write Latency: 0 00:12:57.340 Idle Power: Not Reported 00:12:57.340 Active Power: Not Reported 00:12:57.340 Non-Operational Permissive Mode: Not Supported 00:12:57.340 00:12:57.340 Health Information 00:12:57.340 ================== 00:12:57.340 Critical Warnings: 00:12:57.340 Available Spare Space: OK 00:12:57.340 Temperature: OK 00:12:57.340 Device Reliability: OK 00:12:57.340 Read Only: No 00:12:57.340 Volatile Memory Backup: OK 00:12:57.340 Current Temperature: 323 Kelvin (50 Celsius) 00:12:57.340 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:57.340 Available Spare: 0% 00:12:57.340 Available Spare Threshold: 0% 00:12:57.340 Life Percentage Used: 0% 00:12:57.340 Data Units Read: 981 00:12:57.340 Data Units Written: 819 00:12:57.340 Host Read Commands: 46937 00:12:57.340 Host Write Commands: 45530 00:12:57.340 Controller Busy Time: 0 minutes 00:12:57.340 Power Cycles: 0 00:12:57.340 Power On Hours: 0 hours 00:12:57.340 Unsafe Shutdowns: 0 00:12:57.340 Unrecoverable Media Errors: 0 00:12:57.340 Lifetime Error Log Entries: 0 00:12:57.340 Warning Temperature Time: 0 minutes 00:12:57.340 Critical Temperature Time: 0 minutes 00:12:57.340 00:12:57.340 Number of Queues 00:12:57.340 ================ 00:12:57.340 Number of I/O Submission Queues: 64 00:12:57.340 Number of I/O Completion Queues: 64 00:12:57.340 00:12:57.340 ZNS Specific Controller Data 00:12:57.340 ============================ 00:12:57.340 Zone Append Size Limit: 0 00:12:57.340 00:12:57.340 00:12:57.340 Active Namespaces 00:12:57.340 ================= 00:12:57.340 Namespace ID:1 00:12:57.340 Error Recovery Timeout: Unlimited 00:12:57.340 Command Set Identifier: NVM (00h) 00:12:57.340 Deallocate: Supported 00:12:57.340 Deallocated/Unwritten Error: Supported 00:12:57.340 Deallocated Read Value: All 0x00 00:12:57.340 Deallocate in Write 
Zeroes: Not Supported 00:12:57.340 Deallocated Guard Field: 0xFFFF 00:12:57.340 Flush: Supported 00:12:57.340 Reservation: Not Supported 00:12:57.340 Metadata Transferred as: Separate Metadata Buffer 00:12:57.340 Namespace Sharing Capabilities: Private 00:12:57.340 Size (in LBAs): 1548666 (5GiB) 00:12:57.340 Capacity (in LBAs): 1548666 (5GiB) 00:12:57.340 Utilization (in LBAs): 1548666 (5GiB) 00:12:57.340 Thin Provisioning: Not Supported 00:12:57.340 Per-NS Atomic Units: No 00:12:57.340 Maximum Single Source Range Length: 128 00:12:57.340 Maximum Copy Length: 128 00:12:57.340 Maximum Source Range Count: 128 00:12:57.340 NGUID/EUI64 Never Reused: No 00:12:57.340 Namespace Write Protected: No 00:12:57.340 Number of LBA Formats: 8 00:12:57.340 Current LBA Format: LBA Format #07 00:12:57.340 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.340 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:57.340 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:57.340 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:57.340 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:57.340 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:57.340 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:57.340 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:57.340 00:12:57.340 NVM Specific Namespace Data 00:12:57.340 =========================== 00:12:57.340 Logical Block Storage Tag Mask: 0 00:12:57.340 Protection Information Capabilities: 00:12:57.340 16b Guard Protection Information Storage Tag Support: No 00:12:57.340 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:57.340 Storage Tag Check Read Support: No 00:12:57.340 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.340 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.340 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.340 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.340 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.340 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.340 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.340 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.340 13:54:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:57.340 13:54:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:12:57.599 ===================================================== 00:12:57.599 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:57.599 ===================================================== 00:12:57.599 Controller Capabilities/Features 00:12:57.599 ================================ 00:12:57.599 Vendor ID: 1b36 00:12:57.599 Subsystem Vendor ID: 1af4 00:12:57.599 Serial Number: 12341 00:12:57.599 Model Number: QEMU NVMe Ctrl 00:12:57.599 Firmware Version: 8.0.0 00:12:57.599 Recommended Arb Burst: 6 00:12:57.599 IEEE OUI Identifier: 00 54 52 00:12:57.600 Multi-path I/O 00:12:57.600 May have multiple subsystem ports: No 00:12:57.600 May have multiple controllers: No 00:12:57.600 Associated with SR-IOV VF: No 00:12:57.600 Max Data Transfer Size: 524288 00:12:57.600 Max Number of 
Namespaces: 256 00:12:57.600 Max Number of I/O Queues: 64 00:12:57.600 NVMe Specification Version (VS): 1.4 00:12:57.600 NVMe Specification Version (Identify): 1.4 00:12:57.600 Maximum Queue Entries: 2048 00:12:57.600 Contiguous Queues Required: Yes 00:12:57.600 Arbitration Mechanisms Supported 00:12:57.600 Weighted Round Robin: Not Supported 00:12:57.600 Vendor Specific: Not Supported 00:12:57.600 Reset Timeout: 7500 ms 00:12:57.600 Doorbell Stride: 4 bytes 00:12:57.600 NVM Subsystem Reset: Not Supported 00:12:57.600 Command Sets Supported 00:12:57.600 NVM Command Set: Supported 00:12:57.600 Boot Partition: Not Supported 00:12:57.600 Memory Page Size Minimum: 4096 bytes 00:12:57.600 Memory Page Size Maximum: 65536 bytes 00:12:57.600 Persistent Memory Region: Not Supported 00:12:57.600 Optional Asynchronous Events Supported 00:12:57.600 Namespace Attribute Notices: Supported 00:12:57.600 Firmware Activation Notices: Not Supported 00:12:57.600 ANA Change Notices: Not Supported 00:12:57.600 PLE Aggregate Log Change Notices: Not Supported 00:12:57.600 LBA Status Info Alert Notices: Not Supported 00:12:57.600 EGE Aggregate Log Change Notices: Not Supported 00:12:57.600 Normal NVM Subsystem Shutdown event: Not Supported 00:12:57.600 Zone Descriptor Change Notices: Not Supported 00:12:57.600 Discovery Log Change Notices: Not Supported 00:12:57.600 Controller Attributes 00:12:57.600 128-bit Host Identifier: Not Supported 00:12:57.600 Non-Operational Permissive Mode: Not Supported 00:12:57.600 NVM Sets: Not Supported 00:12:57.600 Read Recovery Levels: Not Supported 00:12:57.600 Endurance Groups: Not Supported 00:12:57.600 Predictable Latency Mode: Not Supported 00:12:57.600 Traffic Based Keep Alive: Not Supported 00:12:57.600 Namespace Granularity: Not Supported 00:12:57.600 SQ Associations: Not Supported 00:12:57.600 UUID List: Not Supported 00:12:57.600 Multi-Domain Subsystem: Not Supported 00:12:57.600 Fixed Capacity Management: Not Supported 00:12:57.600 Variable Capacity Management: Not Supported 00:12:57.600 Delete Endurance Group: Not Supported 00:12:57.600 Delete NVM Set: Not Supported 00:12:57.600 Extended LBA Formats Supported: Supported 00:12:57.600 Flexible Data Placement Supported: Not Supported 00:12:57.600 00:12:57.600 Controller Memory Buffer Support 00:12:57.600 ================================ 00:12:57.600 Supported: No 00:12:57.600 00:12:57.600 Persistent Memory Region Support 00:12:57.600 ================================ 00:12:57.600 Supported: No 00:12:57.600 00:12:57.600 Admin Command Set Attributes 00:12:57.600 ============================ 00:12:57.600 Security Send/Receive: Not Supported 00:12:57.600 Format NVM: Supported 00:12:57.600 Firmware Activate/Download: Not Supported 00:12:57.600 Namespace Management: Supported 00:12:57.600 Device Self-Test: Not Supported 00:12:57.600 Directives: Supported 00:12:57.600 NVMe-MI: Not Supported 00:12:57.600 Virtualization Management: Not Supported 00:12:57.600 Doorbell Buffer Config: Supported 00:12:57.600 Get LBA Status Capability: Not Supported 00:12:57.600 Command & Feature Lockdown Capability: Not Supported 00:12:57.600 Abort Command Limit: 4 00:12:57.600 Async Event Request Limit: 4 00:12:57.600 Number of Firmware Slots: N/A 00:12:57.600 Firmware Slot 1 Read-Only: N/A 00:12:57.600 Firmware Activation Without Reset: N/A 00:12:57.600 Multiple Update Detection Support: N/A 00:12:57.600 Firmware Update Granularity: No Information Provided 00:12:57.600 Per-Namespace SMART Log: Yes 00:12:57.600 Asymmetric Namespace Access Log Page: Not 
Supported 00:12:57.600 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:57.600 Command Effects Log Page: Supported 00:12:57.600 Get Log Page Extended Data: Supported 00:12:57.600 Telemetry Log Pages: Not Supported 00:12:57.600 Persistent Event Log Pages: Not Supported 00:12:57.600 Supported Log Pages Log Page: May Support 00:12:57.600 Commands Supported & Effects Log Page: Not Supported 00:12:57.600 Feature Identifiers & Effects Log Page: May Support 00:12:57.600 NVMe-MI Commands & Effects Log Page: May Support 00:12:57.600 Data Area 4 for Telemetry Log: Not Supported 00:12:57.600 Error Log Page Entries Supported: 1 00:12:57.600 Keep Alive: Not Supported 00:12:57.600 00:12:57.600 NVM Command Set Attributes 00:12:57.600 ========================== 00:12:57.600 Submission Queue Entry Size 00:12:57.600 Max: 64 00:12:57.600 Min: 64 00:12:57.600 Completion Queue Entry Size 00:12:57.600 Max: 16 00:12:57.600 Min: 16 00:12:57.600 Number of Namespaces: 256 00:12:57.600 Compare Command: Supported 00:12:57.600 Write Uncorrectable Command: Not Supported 00:12:57.600 Dataset Management Command: Supported 00:12:57.600 Write Zeroes Command: Supported 00:12:57.600 Set Features Save Field: Supported 00:12:57.600 Reservations: Not Supported 00:12:57.600 Timestamp: Supported 00:12:57.600 Copy: Supported 00:12:57.600 Volatile Write Cache: Present 00:12:57.600 Atomic Write Unit (Normal): 1 00:12:57.600 Atomic Write Unit (PFail): 1 00:12:57.600 Atomic Compare & Write Unit: 1 00:12:57.600 Fused Compare & Write: Not Supported 00:12:57.600 Scatter-Gather List 00:12:57.600 SGL Command Set: Supported 00:12:57.600 SGL Keyed: Not Supported 00:12:57.600 SGL Bit Bucket Descriptor: Not Supported 00:12:57.600 SGL Metadata Pointer: Not Supported 00:12:57.600 Oversized SGL: Not Supported 00:12:57.600 SGL Metadata Address: Not Supported 00:12:57.600 SGL Offset: Not Supported 00:12:57.600 Transport SGL Data Block: Not Supported 00:12:57.600 Replay Protected Memory Block: Not Supported 00:12:57.600 00:12:57.600 Firmware Slot Information 00:12:57.600 ========================= 00:12:57.600 Active slot: 1 00:12:57.600 Slot 1 Firmware Revision: 1.0 00:12:57.600 00:12:57.600 00:12:57.600 Commands Supported and Effects 00:12:57.600 ============================== 00:12:57.600 Admin Commands 00:12:57.600 -------------- 00:12:57.600 Delete I/O Submission Queue (00h): Supported 00:12:57.600 Create I/O Submission Queue (01h): Supported 00:12:57.600 Get Log Page (02h): Supported 00:12:57.600 Delete I/O Completion Queue (04h): Supported 00:12:57.600 Create I/O Completion Queue (05h): Supported 00:12:57.600 Identify (06h): Supported 00:12:57.600 Abort (08h): Supported 00:12:57.600 Set Features (09h): Supported 00:12:57.600 Get Features (0Ah): Supported 00:12:57.600 Asynchronous Event Request (0Ch): Supported 00:12:57.600 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:57.600 Directive Send (19h): Supported 00:12:57.600 Directive Receive (1Ah): Supported 00:12:57.600 Virtualization Management (1Ch): Supported 00:12:57.600 Doorbell Buffer Config (7Ch): Supported 00:12:57.600 Format NVM (80h): Supported LBA-Change 00:12:57.600 I/O Commands 00:12:57.600 ------------ 00:12:57.600 Flush (00h): Supported LBA-Change 00:12:57.600 Write (01h): Supported LBA-Change 00:12:57.600 Read (02h): Supported 00:12:57.600 Compare (05h): Supported 00:12:57.600 Write Zeroes (08h): Supported LBA-Change 00:12:57.600 Dataset Management (09h): Supported LBA-Change 00:12:57.600 Unknown (0Ch): Supported 00:12:57.600 Unknown (12h): Supported 00:12:57.600 
Copy (19h): Supported LBA-Change 00:12:57.600 Unknown (1Dh): Supported LBA-Change 00:12:57.600 00:12:57.600 Error Log 00:12:57.600 ========= 00:12:57.600 00:12:57.600 Arbitration 00:12:57.600 =========== 00:12:57.600 Arbitration Burst: no limit 00:12:57.600 00:12:57.600 Power Management 00:12:57.600 ================ 00:12:57.600 Number of Power States: 1 00:12:57.600 Current Power State: Power State #0 00:12:57.600 Power State #0: 00:12:57.600 Max Power: 25.00 W 00:12:57.600 Non-Operational State: Operational 00:12:57.600 Entry Latency: 16 microseconds 00:12:57.600 Exit Latency: 4 microseconds 00:12:57.600 Relative Read Throughput: 0 00:12:57.600 Relative Read Latency: 0 00:12:57.600 Relative Write Throughput: 0 00:12:57.600 Relative Write Latency: 0 00:12:57.600 Idle Power: Not Reported 00:12:57.600 Active Power: Not Reported 00:12:57.600 Non-Operational Permissive Mode: Not Supported 00:12:57.600 00:12:57.600 Health Information 00:12:57.600 ================== 00:12:57.600 Critical Warnings: 00:12:57.600 Available Spare Space: OK 00:12:57.600 Temperature: OK 00:12:57.600 Device Reliability: OK 00:12:57.600 Read Only: No 00:12:57.600 Volatile Memory Backup: OK 00:12:57.600 Current Temperature: 323 Kelvin (50 Celsius) 00:12:57.600 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:57.600 Available Spare: 0% 00:12:57.600 Available Spare Threshold: 0% 00:12:57.600 Life Percentage Used: 0% 00:12:57.600 Data Units Read: 726 00:12:57.600 Data Units Written: 571 00:12:57.600 Host Read Commands: 33507 00:12:57.601 Host Write Commands: 31153 00:12:57.601 Controller Busy Time: 0 minutes 00:12:57.601 Power Cycles: 0 00:12:57.601 Power On Hours: 0 hours 00:12:57.601 Unsafe Shutdowns: 0 00:12:57.601 Unrecoverable Media Errors: 0 00:12:57.601 Lifetime Error Log Entries: 0 00:12:57.601 Warning Temperature Time: 0 minutes 00:12:57.601 Critical Temperature Time: 0 minutes 00:12:57.601 00:12:57.601 Number of Queues 00:12:57.601 ================ 00:12:57.601 Number of I/O Submission Queues: 64 00:12:57.601 Number of I/O Completion Queues: 64 00:12:57.601 00:12:57.601 ZNS Specific Controller Data 00:12:57.601 ============================ 00:12:57.601 Zone Append Size Limit: 0 00:12:57.601 00:12:57.601 00:12:57.601 Active Namespaces 00:12:57.601 ================= 00:12:57.601 Namespace ID:1 00:12:57.601 Error Recovery Timeout: Unlimited 00:12:57.601 Command Set Identifier: NVM (00h) 00:12:57.601 Deallocate: Supported 00:12:57.601 Deallocated/Unwritten Error: Supported 00:12:57.601 Deallocated Read Value: All 0x00 00:12:57.601 Deallocate in Write Zeroes: Not Supported 00:12:57.601 Deallocated Guard Field: 0xFFFF 00:12:57.601 Flush: Supported 00:12:57.601 Reservation: Not Supported 00:12:57.601 Namespace Sharing Capabilities: Private 00:12:57.601 Size (in LBAs): 1310720 (5GiB) 00:12:57.601 Capacity (in LBAs): 1310720 (5GiB) 00:12:57.601 Utilization (in LBAs): 1310720 (5GiB) 00:12:57.601 Thin Provisioning: Not Supported 00:12:57.601 Per-NS Atomic Units: No 00:12:57.601 Maximum Single Source Range Length: 128 00:12:57.601 Maximum Copy Length: 128 00:12:57.601 Maximum Source Range Count: 128 00:12:57.601 NGUID/EUI64 Never Reused: No 00:12:57.601 Namespace Write Protected: No 00:12:57.601 Number of LBA Formats: 8 00:12:57.601 Current LBA Format: LBA Format #04 00:12:57.601 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.601 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:57.601 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:57.601 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:57.601 
LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:57.601 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:57.601 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:57.601 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:57.601 00:12:57.601 NVM Specific Namespace Data 00:12:57.601 =========================== 00:12:57.601 Logical Block Storage Tag Mask: 0 00:12:57.601 Protection Information Capabilities: 00:12:57.601 16b Guard Protection Information Storage Tag Support: No 00:12:57.601 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:57.601 Storage Tag Check Read Support: No 00:12:57.601 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.601 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.601 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.601 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.601 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.601 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.601 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.601 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.601 13:54:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:57.601 13:54:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:12:57.860 ===================================================== 00:12:57.860 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:57.860 ===================================================== 00:12:57.860 Controller Capabilities/Features 00:12:57.860 ================================ 00:12:57.860 Vendor ID: 1b36 00:12:57.860 Subsystem Vendor ID: 1af4 00:12:57.860 Serial Number: 12342 00:12:57.861 Model Number: QEMU NVMe Ctrl 00:12:57.861 Firmware Version: 8.0.0 00:12:57.861 Recommended Arb Burst: 6 00:12:57.861 IEEE OUI Identifier: 00 54 52 00:12:57.861 Multi-path I/O 00:12:57.861 May have multiple subsystem ports: No 00:12:57.861 May have multiple controllers: No 00:12:57.861 Associated with SR-IOV VF: No 00:12:57.861 Max Data Transfer Size: 524288 00:12:57.861 Max Number of Namespaces: 256 00:12:57.861 Max Number of I/O Queues: 64 00:12:57.861 NVMe Specification Version (VS): 1.4 00:12:57.861 NVMe Specification Version (Identify): 1.4 00:12:57.861 Maximum Queue Entries: 2048 00:12:57.861 Contiguous Queues Required: Yes 00:12:57.861 Arbitration Mechanisms Supported 00:12:57.861 Weighted Round Robin: Not Supported 00:12:57.861 Vendor Specific: Not Supported 00:12:57.861 Reset Timeout: 7500 ms 00:12:57.861 Doorbell Stride: 4 bytes 00:12:57.861 NVM Subsystem Reset: Not Supported 00:12:57.861 Command Sets Supported 00:12:57.861 NVM Command Set: Supported 00:12:57.861 Boot Partition: Not Supported 00:12:57.861 Memory Page Size Minimum: 4096 bytes 00:12:57.861 Memory Page Size Maximum: 65536 bytes 00:12:57.861 Persistent Memory Region: Not Supported 00:12:57.861 Optional Asynchronous Events Supported 00:12:57.861 Namespace Attribute Notices: Supported 00:12:57.861 Firmware Activation Notices: Not Supported 00:12:57.861 ANA Change Notices: Not Supported 00:12:57.861 PLE Aggregate Log Change Notices: Not 
Supported 00:12:57.861 LBA Status Info Alert Notices: Not Supported 00:12:57.861 EGE Aggregate Log Change Notices: Not Supported 00:12:57.861 Normal NVM Subsystem Shutdown event: Not Supported 00:12:57.861 Zone Descriptor Change Notices: Not Supported 00:12:57.861 Discovery Log Change Notices: Not Supported 00:12:57.861 Controller Attributes 00:12:57.861 128-bit Host Identifier: Not Supported 00:12:57.861 Non-Operational Permissive Mode: Not Supported 00:12:57.861 NVM Sets: Not Supported 00:12:57.861 Read Recovery Levels: Not Supported 00:12:57.861 Endurance Groups: Not Supported 00:12:57.861 Predictable Latency Mode: Not Supported 00:12:57.861 Traffic Based Keep Alive: Not Supported 00:12:57.861 Namespace Granularity: Not Supported 00:12:57.861 SQ Associations: Not Supported 00:12:57.861 UUID List: Not Supported 00:12:57.861 Multi-Domain Subsystem: Not Supported 00:12:57.861 Fixed Capacity Management: Not Supported 00:12:57.861 Variable Capacity Management: Not Supported 00:12:57.861 Delete Endurance Group: Not Supported 00:12:57.861 Delete NVM Set: Not Supported 00:12:57.861 Extended LBA Formats Supported: Supported 00:12:57.861 Flexible Data Placement Supported: Not Supported 00:12:57.861 00:12:57.861 Controller Memory Buffer Support 00:12:57.861 ================================ 00:12:57.861 Supported: No 00:12:57.861 00:12:57.861 Persistent Memory Region Support 00:12:57.861 ================================ 00:12:57.861 Supported: No 00:12:57.861 00:12:57.861 Admin Command Set Attributes 00:12:57.861 ============================ 00:12:57.861 Security Send/Receive: Not Supported 00:12:57.861 Format NVM: Supported 00:12:57.861 Firmware Activate/Download: Not Supported 00:12:57.861 Namespace Management: Supported 00:12:57.861 Device Self-Test: Not Supported 00:12:57.861 Directives: Supported 00:12:57.861 NVMe-MI: Not Supported 00:12:57.861 Virtualization Management: Not Supported 00:12:57.861 Doorbell Buffer Config: Supported 00:12:57.861 Get LBA Status Capability: Not Supported 00:12:57.861 Command & Feature Lockdown Capability: Not Supported 00:12:57.861 Abort Command Limit: 4 00:12:57.861 Async Event Request Limit: 4 00:12:57.861 Number of Firmware Slots: N/A 00:12:57.861 Firmware Slot 1 Read-Only: N/A 00:12:57.861 Firmware Activation Without Reset: N/A 00:12:57.861 Multiple Update Detection Support: N/A 00:12:57.861 Firmware Update Granularity: No Information Provided 00:12:57.861 Per-Namespace SMART Log: Yes 00:12:57.861 Asymmetric Namespace Access Log Page: Not Supported 00:12:57.861 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:57.861 Command Effects Log Page: Supported 00:12:57.861 Get Log Page Extended Data: Supported 00:12:57.861 Telemetry Log Pages: Not Supported 00:12:57.861 Persistent Event Log Pages: Not Supported 00:12:57.861 Supported Log Pages Log Page: May Support 00:12:57.861 Commands Supported & Effects Log Page: Not Supported 00:12:57.861 Feature Identifiers & Effects Log Page: May Support 00:12:57.861 NVMe-MI Commands & Effects Log Page: May Support 00:12:57.861 Data Area 4 for Telemetry Log: Not Supported 00:12:57.861 Error Log Page Entries Supported: 1 00:12:57.861 Keep Alive: Not Supported 00:12:57.861 00:12:57.861 NVM Command Set Attributes 00:12:57.861 ========================== 00:12:57.861 Submission Queue Entry Size 00:12:57.861 Max: 64 00:12:57.861 Min: 64 00:12:57.861 Completion Queue Entry Size 00:12:57.861 Max: 16 00:12:57.861 Min: 16 00:12:57.861 Number of Namespaces: 256 00:12:57.861 Compare Command: Supported 00:12:57.861 Write Uncorrectable Command: 
Not Supported 00:12:57.861 Dataset Management Command: Supported 00:12:57.861 Write Zeroes Command: Supported 00:12:57.861 Set Features Save Field: Supported 00:12:57.861 Reservations: Not Supported 00:12:57.861 Timestamp: Supported 00:12:57.861 Copy: Supported 00:12:57.861 Volatile Write Cache: Present 00:12:57.861 Atomic Write Unit (Normal): 1 00:12:57.861 Atomic Write Unit (PFail): 1 00:12:57.861 Atomic Compare & Write Unit: 1 00:12:57.861 Fused Compare & Write: Not Supported 00:12:57.861 Scatter-Gather List 00:12:57.861 SGL Command Set: Supported 00:12:57.861 SGL Keyed: Not Supported 00:12:57.861 SGL Bit Bucket Descriptor: Not Supported 00:12:57.861 SGL Metadata Pointer: Not Supported 00:12:57.861 Oversized SGL: Not Supported 00:12:57.861 SGL Metadata Address: Not Supported 00:12:57.861 SGL Offset: Not Supported 00:12:57.861 Transport SGL Data Block: Not Supported 00:12:57.861 Replay Protected Memory Block: Not Supported 00:12:57.861 00:12:57.861 Firmware Slot Information 00:12:57.861 ========================= 00:12:57.861 Active slot: 1 00:12:57.861 Slot 1 Firmware Revision: 1.0 00:12:57.861 00:12:57.861 00:12:57.861 Commands Supported and Effects 00:12:57.861 ============================== 00:12:57.861 Admin Commands 00:12:57.861 -------------- 00:12:57.861 Delete I/O Submission Queue (00h): Supported 00:12:57.861 Create I/O Submission Queue (01h): Supported 00:12:57.861 Get Log Page (02h): Supported 00:12:57.861 Delete I/O Completion Queue (04h): Supported 00:12:57.861 Create I/O Completion Queue (05h): Supported 00:12:57.861 Identify (06h): Supported 00:12:57.861 Abort (08h): Supported 00:12:57.861 Set Features (09h): Supported 00:12:57.861 Get Features (0Ah): Supported 00:12:57.861 Asynchronous Event Request (0Ch): Supported 00:12:57.861 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:57.861 Directive Send (19h): Supported 00:12:57.861 Directive Receive (1Ah): Supported 00:12:57.861 Virtualization Management (1Ch): Supported 00:12:57.861 Doorbell Buffer Config (7Ch): Supported 00:12:57.861 Format NVM (80h): Supported LBA-Change 00:12:57.861 I/O Commands 00:12:57.861 ------------ 00:12:57.861 Flush (00h): Supported LBA-Change 00:12:57.861 Write (01h): Supported LBA-Change 00:12:57.861 Read (02h): Supported 00:12:57.861 Compare (05h): Supported 00:12:57.861 Write Zeroes (08h): Supported LBA-Change 00:12:57.861 Dataset Management (09h): Supported LBA-Change 00:12:57.861 Unknown (0Ch): Supported 00:12:57.861 Unknown (12h): Supported 00:12:57.861 Copy (19h): Supported LBA-Change 00:12:57.861 Unknown (1Dh): Supported LBA-Change 00:12:57.861 00:12:57.861 Error Log 00:12:57.861 ========= 00:12:57.861 00:12:57.861 Arbitration 00:12:57.861 =========== 00:12:57.862 Arbitration Burst: no limit 00:12:57.862 00:12:57.862 Power Management 00:12:57.862 ================ 00:12:57.862 Number of Power States: 1 00:12:57.862 Current Power State: Power State #0 00:12:57.862 Power State #0: 00:12:57.862 Max Power: 25.00 W 00:12:57.862 Non-Operational State: Operational 00:12:57.862 Entry Latency: 16 microseconds 00:12:57.862 Exit Latency: 4 microseconds 00:12:57.862 Relative Read Throughput: 0 00:12:57.862 Relative Read Latency: 0 00:12:57.862 Relative Write Throughput: 0 00:12:57.862 Relative Write Latency: 0 00:12:57.862 Idle Power: Not Reported 00:12:57.862 Active Power: Not Reported 00:12:57.862 Non-Operational Permissive Mode: Not Supported 00:12:57.862 00:12:57.862 Health Information 00:12:57.862 ================== 00:12:57.862 Critical Warnings: 00:12:57.862 Available Spare Space: 
OK 00:12:57.862 Temperature: OK 00:12:57.862 Device Reliability: OK 00:12:57.862 Read Only: No 00:12:57.862 Volatile Memory Backup: OK 00:12:57.862 Current Temperature: 323 Kelvin (50 Celsius) 00:12:57.862 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:57.862 Available Spare: 0% 00:12:57.862 Available Spare Threshold: 0% 00:12:57.862 Life Percentage Used: 0% 00:12:57.862 Data Units Read: 2079 00:12:57.862 Data Units Written: 1759 00:12:57.862 Host Read Commands: 98246 00:12:57.862 Host Write Commands: 94016 00:12:57.862 Controller Busy Time: 0 minutes 00:12:57.862 Power Cycles: 0 00:12:57.862 Power On Hours: 0 hours 00:12:57.862 Unsafe Shutdowns: 0 00:12:57.862 Unrecoverable Media Errors: 0 00:12:57.862 Lifetime Error Log Entries: 0 00:12:57.862 Warning Temperature Time: 0 minutes 00:12:57.862 Critical Temperature Time: 0 minutes 00:12:57.862 00:12:57.862 Number of Queues 00:12:57.862 ================ 00:12:57.862 Number of I/O Submission Queues: 64 00:12:57.862 Number of I/O Completion Queues: 64 00:12:57.862 00:12:57.862 ZNS Specific Controller Data 00:12:57.862 ============================ 00:12:57.862 Zone Append Size Limit: 0 00:12:57.862 00:12:57.862 00:12:57.862 Active Namespaces 00:12:57.862 ================= 00:12:57.862 Namespace ID:1 00:12:57.862 Error Recovery Timeout: Unlimited 00:12:57.862 Command Set Identifier: NVM (00h) 00:12:57.862 Deallocate: Supported 00:12:57.862 Deallocated/Unwritten Error: Supported 00:12:57.862 Deallocated Read Value: All 0x00 00:12:57.862 Deallocate in Write Zeroes: Not Supported 00:12:57.862 Deallocated Guard Field: 0xFFFF 00:12:57.862 Flush: Supported 00:12:57.862 Reservation: Not Supported 00:12:57.862 Namespace Sharing Capabilities: Private 00:12:57.862 Size (in LBAs): 1048576 (4GiB) 00:12:57.862 Capacity (in LBAs): 1048576 (4GiB) 00:12:57.862 Utilization (in LBAs): 1048576 (4GiB) 00:12:57.862 Thin Provisioning: Not Supported 00:12:57.862 Per-NS Atomic Units: No 00:12:57.862 Maximum Single Source Range Length: 128 00:12:57.862 Maximum Copy Length: 128 00:12:57.862 Maximum Source Range Count: 128 00:12:57.862 NGUID/EUI64 Never Reused: No 00:12:57.862 Namespace Write Protected: No 00:12:57.862 Number of LBA Formats: 8 00:12:57.862 Current LBA Format: LBA Format #04 00:12:57.862 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.862 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:57.862 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:57.862 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:57.862 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:57.862 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:57.862 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:57.862 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:57.862 00:12:57.862 NVM Specific Namespace Data 00:12:57.862 =========================== 00:12:57.862 Logical Block Storage Tag Mask: 0 00:12:57.862 Protection Information Capabilities: 00:12:57.862 16b Guard Protection Information Storage Tag Support: No 00:12:57.862 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:57.862 Storage Tag Check Read Support: No 00:12:57.862 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:12:57.862 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Namespace ID:2 00:12:57.862 Error Recovery Timeout: Unlimited 00:12:57.862 Command Set Identifier: NVM (00h) 00:12:57.862 Deallocate: Supported 00:12:57.862 Deallocated/Unwritten Error: Supported 00:12:57.862 Deallocated Read Value: All 0x00 00:12:57.862 Deallocate in Write Zeroes: Not Supported 00:12:57.862 Deallocated Guard Field: 0xFFFF 00:12:57.862 Flush: Supported 00:12:57.862 Reservation: Not Supported 00:12:57.862 Namespace Sharing Capabilities: Private 00:12:57.862 Size (in LBAs): 1048576 (4GiB) 00:12:57.862 Capacity (in LBAs): 1048576 (4GiB) 00:12:57.862 Utilization (in LBAs): 1048576 (4GiB) 00:12:57.862 Thin Provisioning: Not Supported 00:12:57.862 Per-NS Atomic Units: No 00:12:57.862 Maximum Single Source Range Length: 128 00:12:57.862 Maximum Copy Length: 128 00:12:57.862 Maximum Source Range Count: 128 00:12:57.862 NGUID/EUI64 Never Reused: No 00:12:57.862 Namespace Write Protected: No 00:12:57.862 Number of LBA Formats: 8 00:12:57.862 Current LBA Format: LBA Format #04 00:12:57.862 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.862 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:57.862 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:57.862 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:57.862 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:57.862 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:57.862 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:57.862 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:57.862 00:12:57.862 NVM Specific Namespace Data 00:12:57.862 =========================== 00:12:57.862 Logical Block Storage Tag Mask: 0 00:12:57.862 Protection Information Capabilities: 00:12:57.862 16b Guard Protection Information Storage Tag Support: No 00:12:57.862 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:57.862 Storage Tag Check Read Support: No 00:12:57.862 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:57.862 Namespace ID:3 00:12:57.862 Error Recovery Timeout: Unlimited 00:12:57.862 Command Set Identifier: NVM (00h) 00:12:57.862 Deallocate: Supported 00:12:57.862 Deallocated/Unwritten Error: Supported 00:12:57.862 Deallocated Read Value: All 0x00 00:12:57.862 Deallocate in Write Zeroes: Not Supported 00:12:57.862 Deallocated Guard Field: 0xFFFF 00:12:57.862 Flush: Supported 
00:12:57.862 Reservation: Not Supported 00:12:57.862 Namespace Sharing Capabilities: Private 00:12:57.862 Size (in LBAs): 1048576 (4GiB) 00:12:57.862 Capacity (in LBAs): 1048576 (4GiB) 00:12:57.862 Utilization (in LBAs): 1048576 (4GiB) 00:12:57.862 Thin Provisioning: Not Supported 00:12:57.862 Per-NS Atomic Units: No 00:12:57.862 Maximum Single Source Range Length: 128 00:12:57.862 Maximum Copy Length: 128 00:12:57.862 Maximum Source Range Count: 128 00:12:57.862 NGUID/EUI64 Never Reused: No 00:12:57.862 Namespace Write Protected: No 00:12:57.862 Number of LBA Formats: 8 00:12:57.862 Current LBA Format: LBA Format #04 00:12:57.862 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.862 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:57.862 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:57.862 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:57.862 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:57.862 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:57.862 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:57.862 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:57.862 00:12:57.862 NVM Specific Namespace Data 00:12:57.862 =========================== 00:12:57.862 Logical Block Storage Tag Mask: 0 00:12:57.862 Protection Information Capabilities: 00:12:57.862 16b Guard Protection Information Storage Tag Support: No 00:12:57.862 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:58.121 Storage Tag Check Read Support: No 00:12:58.121 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.121 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.121 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.121 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.121 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.121 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.121 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.121 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.121 13:54:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:58.121 13:54:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:12:58.380 ===================================================== 00:12:58.380 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:58.380 ===================================================== 00:12:58.380 Controller Capabilities/Features 00:12:58.380 ================================ 00:12:58.380 Vendor ID: 1b36 00:12:58.380 Subsystem Vendor ID: 1af4 00:12:58.380 Serial Number: 12343 00:12:58.380 Model Number: QEMU NVMe Ctrl 00:12:58.380 Firmware Version: 8.0.0 00:12:58.380 Recommended Arb Burst: 6 00:12:58.380 IEEE OUI Identifier: 00 54 52 00:12:58.380 Multi-path I/O 00:12:58.380 May have multiple subsystem ports: No 00:12:58.380 May have multiple controllers: Yes 00:12:58.380 Associated with SR-IOV VF: No 00:12:58.380 Max Data Transfer Size: 524288 00:12:58.380 Max Number of Namespaces: 256 00:12:58.380 Max Number of I/O Queues: 64 00:12:58.380 NVMe Specification Version (VS): 1.4 00:12:58.380 NVMe Specification Version (Identify): 
1.4 00:12:58.380 Maximum Queue Entries: 2048 00:12:58.380 Contiguous Queues Required: Yes 00:12:58.380 Arbitration Mechanisms Supported 00:12:58.380 Weighted Round Robin: Not Supported 00:12:58.380 Vendor Specific: Not Supported 00:12:58.380 Reset Timeout: 7500 ms 00:12:58.380 Doorbell Stride: 4 bytes 00:12:58.380 NVM Subsystem Reset: Not Supported 00:12:58.380 Command Sets Supported 00:12:58.380 NVM Command Set: Supported 00:12:58.380 Boot Partition: Not Supported 00:12:58.380 Memory Page Size Minimum: 4096 bytes 00:12:58.380 Memory Page Size Maximum: 65536 bytes 00:12:58.380 Persistent Memory Region: Not Supported 00:12:58.380 Optional Asynchronous Events Supported 00:12:58.380 Namespace Attribute Notices: Supported 00:12:58.380 Firmware Activation Notices: Not Supported 00:12:58.380 ANA Change Notices: Not Supported 00:12:58.380 PLE Aggregate Log Change Notices: Not Supported 00:12:58.380 LBA Status Info Alert Notices: Not Supported 00:12:58.380 EGE Aggregate Log Change Notices: Not Supported 00:12:58.380 Normal NVM Subsystem Shutdown event: Not Supported 00:12:58.380 Zone Descriptor Change Notices: Not Supported 00:12:58.380 Discovery Log Change Notices: Not Supported 00:12:58.380 Controller Attributes 00:12:58.380 128-bit Host Identifier: Not Supported 00:12:58.380 Non-Operational Permissive Mode: Not Supported 00:12:58.380 NVM Sets: Not Supported 00:12:58.380 Read Recovery Levels: Not Supported 00:12:58.380 Endurance Groups: Supported 00:12:58.380 Predictable Latency Mode: Not Supported 00:12:58.380 Traffic Based Keep Alive: Not Supported 00:12:58.380 Namespace Granularity: Not Supported 00:12:58.380 SQ Associations: Not Supported 00:12:58.380 UUID List: Not Supported 00:12:58.380 Multi-Domain Subsystem: Not Supported 00:12:58.380 Fixed Capacity Management: Not Supported 00:12:58.380 Variable Capacity Management: Not Supported 00:12:58.380 Delete Endurance Group: Not Supported 00:12:58.380 Delete NVM Set: Not Supported 00:12:58.380 Extended LBA Formats Supported: Supported 00:12:58.380 Flexible Data Placement Supported: Supported 00:12:58.380 00:12:58.380 Controller Memory Buffer Support 00:12:58.380 ================================ 00:12:58.380 Supported: No 00:12:58.380 00:12:58.380 Persistent Memory Region Support 00:12:58.380 ================================ 00:12:58.380 Supported: No 00:12:58.380 00:12:58.380 Admin Command Set Attributes 00:12:58.380 ============================ 00:12:58.380 Security Send/Receive: Not Supported 00:12:58.380 Format NVM: Supported 00:12:58.380 Firmware Activate/Download: Not Supported 00:12:58.380 Namespace Management: Supported 00:12:58.380 Device Self-Test: Not Supported 00:12:58.380 Directives: Supported 00:12:58.380 NVMe-MI: Not Supported 00:12:58.380 Virtualization Management: Not Supported 00:12:58.380 Doorbell Buffer Config: Supported 00:12:58.380 Get LBA Status Capability: Not Supported 00:12:58.380 Command & Feature Lockdown Capability: Not Supported 00:12:58.380 Abort Command Limit: 4 00:12:58.380 Async Event Request Limit: 4 00:12:58.380 Number of Firmware Slots: N/A 00:12:58.380 Firmware Slot 1 Read-Only: N/A 00:12:58.381 Firmware Activation Without Reset: N/A 00:12:58.381 Multiple Update Detection Support: N/A 00:12:58.381 Firmware Update Granularity: No Information Provided 00:12:58.381 Per-Namespace SMART Log: Yes 00:12:58.381 Asymmetric Namespace Access Log Page: Not Supported 00:12:58.381 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:58.381 Command Effects Log Page: Supported 00:12:58.381 Get Log Page Extended Data: 
Supported 00:12:58.381 Telemetry Log Pages: Not Supported 00:12:58.381 Persistent Event Log Pages: Not Supported 00:12:58.381 Supported Log Pages Log Page: May Support 00:12:58.381 Commands Supported & Effects Log Page: Not Supported 00:12:58.381 Feature Identifiers & Effects Log Page: May Support 00:12:58.381 NVMe-MI Commands & Effects Log Page: May Support 00:12:58.381 Data Area 4 for Telemetry Log: Not Supported 00:12:58.381 Error Log Page Entries Supported: 1 00:12:58.381 Keep Alive: Not Supported 00:12:58.381 00:12:58.381 NVM Command Set Attributes 00:12:58.381 ========================== 00:12:58.381 Submission Queue Entry Size 00:12:58.381 Max: 64 00:12:58.381 Min: 64 00:12:58.381 Completion Queue Entry Size 00:12:58.381 Max: 16 00:12:58.381 Min: 16 00:12:58.381 Number of Namespaces: 256 00:12:58.381 Compare Command: Supported 00:12:58.381 Write Uncorrectable Command: Not Supported 00:12:58.381 Dataset Management Command: Supported 00:12:58.381 Write Zeroes Command: Supported 00:12:58.381 Set Features Save Field: Supported 00:12:58.381 Reservations: Not Supported 00:12:58.381 Timestamp: Supported 00:12:58.381 Copy: Supported 00:12:58.381 Volatile Write Cache: Present 00:12:58.381 Atomic Write Unit (Normal): 1 00:12:58.381 Atomic Write Unit (PFail): 1 00:12:58.381 Atomic Compare & Write Unit: 1 00:12:58.381 Fused Compare & Write: Not Supported 00:12:58.381 Scatter-Gather List 00:12:58.381 SGL Command Set: Supported 00:12:58.381 SGL Keyed: Not Supported 00:12:58.381 SGL Bit Bucket Descriptor: Not Supported 00:12:58.381 SGL Metadata Pointer: Not Supported 00:12:58.381 Oversized SGL: Not Supported 00:12:58.381 SGL Metadata Address: Not Supported 00:12:58.381 SGL Offset: Not Supported 00:12:58.381 Transport SGL Data Block: Not Supported 00:12:58.381 Replay Protected Memory Block: Not Supported 00:12:58.381 00:12:58.381 Firmware Slot Information 00:12:58.381 ========================= 00:12:58.381 Active slot: 1 00:12:58.381 Slot 1 Firmware Revision: 1.0 00:12:58.381 00:12:58.381 00:12:58.381 Commands Supported and Effects 00:12:58.381 ============================== 00:12:58.381 Admin Commands 00:12:58.381 -------------- 00:12:58.381 Delete I/O Submission Queue (00h): Supported 00:12:58.381 Create I/O Submission Queue (01h): Supported 00:12:58.381 Get Log Page (02h): Supported 00:12:58.381 Delete I/O Completion Queue (04h): Supported 00:12:58.381 Create I/O Completion Queue (05h): Supported 00:12:58.381 Identify (06h): Supported 00:12:58.381 Abort (08h): Supported 00:12:58.381 Set Features (09h): Supported 00:12:58.381 Get Features (0Ah): Supported 00:12:58.381 Asynchronous Event Request (0Ch): Supported 00:12:58.381 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:58.381 Directive Send (19h): Supported 00:12:58.381 Directive Receive (1Ah): Supported 00:12:58.381 Virtualization Management (1Ch): Supported 00:12:58.381 Doorbell Buffer Config (7Ch): Supported 00:12:58.381 Format NVM (80h): Supported LBA-Change 00:12:58.381 I/O Commands 00:12:58.381 ------------ 00:12:58.381 Flush (00h): Supported LBA-Change 00:12:58.381 Write (01h): Supported LBA-Change 00:12:58.381 Read (02h): Supported 00:12:58.381 Compare (05h): Supported 00:12:58.381 Write Zeroes (08h): Supported LBA-Change 00:12:58.381 Dataset Management (09h): Supported LBA-Change 00:12:58.381 Unknown (0Ch): Supported 00:12:58.381 Unknown (12h): Supported 00:12:58.381 Copy (19h): Supported LBA-Change 00:12:58.381 Unknown (1Dh): Supported LBA-Change 00:12:58.381 00:12:58.381 Error Log 00:12:58.381 ========= 00:12:58.381 
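(Aside, not part of the captured output: the capacity and FDP figures printed further below for this controller can be sanity-checked by hand. A namespace's byte size is its LBA count times the data size of its current LBA format, and the ratio of media to host bytes written gives the effective write amplification. A minimal shell sketch of that arithmetic, using values quoted from the dump below:)
  # 262144 LBAs x 4096 B (current LBA Format #04 data size) = 1 GiB,
  # matching the "Size (in LBAs): 262144 (1GiB)" line for Namespace ID:1 below
  echo "$(( 262144 * 4096 / (1024 * 1024 * 1024) )) GiB"
  # FDP statistics below: 406343680 media bytes / 406298624 host bytes ~= 1.0001,
  # i.e. negligible write amplification over this short run
  awk 'BEGIN { printf "write amplification: %.4f\n", 406343680 / 406298624 }'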
00:12:58.381 Arbitration 00:12:58.381 =========== 00:12:58.381 Arbitration Burst: no limit 00:12:58.381 00:12:58.381 Power Management 00:12:58.381 ================ 00:12:58.381 Number of Power States: 1 00:12:58.381 Current Power State: Power State #0 00:12:58.381 Power State #0: 00:12:58.381 Max Power: 25.00 W 00:12:58.381 Non-Operational State: Operational 00:12:58.381 Entry Latency: 16 microseconds 00:12:58.381 Exit Latency: 4 microseconds 00:12:58.381 Relative Read Throughput: 0 00:12:58.381 Relative Read Latency: 0 00:12:58.381 Relative Write Throughput: 0 00:12:58.381 Relative Write Latency: 0 00:12:58.381 Idle Power: Not Reported 00:12:58.381 Active Power: Not Reported 00:12:58.381 Non-Operational Permissive Mode: Not Supported 00:12:58.381 00:12:58.381 Health Information 00:12:58.381 ================== 00:12:58.381 Critical Warnings: 00:12:58.381 Available Spare Space: OK 00:12:58.381 Temperature: OK 00:12:58.381 Device Reliability: OK 00:12:58.381 Read Only: No 00:12:58.381 Volatile Memory Backup: OK 00:12:58.381 Current Temperature: 323 Kelvin (50 Celsius) 00:12:58.381 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:58.381 Available Spare: 0% 00:12:58.381 Available Spare Threshold: 0% 00:12:58.381 Life Percentage Used: 0% 00:12:58.381 Data Units Read: 765 00:12:58.381 Data Units Written: 658 00:12:58.381 Host Read Commands: 33308 00:12:58.381 Host Write Commands: 31898 00:12:58.381 Controller Busy Time: 0 minutes 00:12:58.381 Power Cycles: 0 00:12:58.381 Power On Hours: 0 hours 00:12:58.381 Unsafe Shutdowns: 0 00:12:58.381 Unrecoverable Media Errors: 0 00:12:58.381 Lifetime Error Log Entries: 0 00:12:58.381 Warning Temperature Time: 0 minutes 00:12:58.381 Critical Temperature Time: 0 minutes 00:12:58.381 00:12:58.381 Number of Queues 00:12:58.381 ================ 00:12:58.381 Number of I/O Submission Queues: 64 00:12:58.381 Number of I/O Completion Queues: 64 00:12:58.381 00:12:58.381 ZNS Specific Controller Data 00:12:58.381 ============================ 00:12:58.381 Zone Append Size Limit: 0 00:12:58.381 00:12:58.381 00:12:58.381 Active Namespaces 00:12:58.381 ================= 00:12:58.381 Namespace ID:1 00:12:58.381 Error Recovery Timeout: Unlimited 00:12:58.381 Command Set Identifier: NVM (00h) 00:12:58.381 Deallocate: Supported 00:12:58.381 Deallocated/Unwritten Error: Supported 00:12:58.381 Deallocated Read Value: All 0x00 00:12:58.381 Deallocate in Write Zeroes: Not Supported 00:12:58.381 Deallocated Guard Field: 0xFFFF 00:12:58.381 Flush: Supported 00:12:58.381 Reservation: Not Supported 00:12:58.381 Namespace Sharing Capabilities: Multiple Controllers 00:12:58.381 Size (in LBAs): 262144 (1GiB) 00:12:58.381 Capacity (in LBAs): 262144 (1GiB) 00:12:58.381 Utilization (in LBAs): 262144 (1GiB) 00:12:58.381 Thin Provisioning: Not Supported 00:12:58.381 Per-NS Atomic Units: No 00:12:58.381 Maximum Single Source Range Length: 128 00:12:58.381 Maximum Copy Length: 128 00:12:58.381 Maximum Source Range Count: 128 00:12:58.381 NGUID/EUI64 Never Reused: No 00:12:58.381 Namespace Write Protected: No 00:12:58.381 Endurance group ID: 1 00:12:58.381 Number of LBA Formats: 8 00:12:58.381 Current LBA Format: LBA Format #04 00:12:58.381 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:58.381 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:58.381 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:58.381 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:58.381 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:58.381 LBA Format #05: Data Size: 4096 Metadata Size: 
8 00:12:58.381 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:58.381 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:58.381 00:12:58.381 Get Feature FDP: 00:12:58.381 ================ 00:12:58.381 Enabled: Yes 00:12:58.381 FDP configuration index: 0 00:12:58.381 00:12:58.381 FDP configurations log page 00:12:58.381 =========================== 00:12:58.381 Number of FDP configurations: 1 00:12:58.381 Version: 0 00:12:58.381 Size: 112 00:12:58.382 FDP Configuration Descriptor: 0 00:12:58.382 Descriptor Size: 96 00:12:58.382 Reclaim Group Identifier format: 2 00:12:58.382 FDP Volatile Write Cache: Not Present 00:12:58.382 FDP Configuration: Valid 00:12:58.382 Vendor Specific Size: 0 00:12:58.382 Number of Reclaim Groups: 2 00:12:58.382 Number of Reclaim Unit Handles: 8 00:12:58.382 Max Placement Identifiers: 128 00:12:58.382 Number of Namespaces Supported: 256 00:12:58.382 Reclaim Unit Nominal Size: 6000000 bytes 00:12:58.382 Estimated Reclaim Unit Time Limit: Not Reported 00:12:58.382 RUH Desc #000: RUH Type: Initially Isolated 00:12:58.382 RUH Desc #001: RUH Type: Initially Isolated 00:12:58.382 RUH Desc #002: RUH Type: Initially Isolated 00:12:58.382 RUH Desc #003: RUH Type: Initially Isolated 00:12:58.382 RUH Desc #004: RUH Type: Initially Isolated 00:12:58.382 RUH Desc #005: RUH Type: Initially Isolated 00:12:58.382 RUH Desc #006: RUH Type: Initially Isolated 00:12:58.382 RUH Desc #007: RUH Type: Initially Isolated 00:12:58.382 00:12:58.382 FDP reclaim unit handle usage log page 00:12:58.382 ====================================== 00:12:58.382 Number of Reclaim Unit Handles: 8 00:12:58.382 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:58.382 RUH Usage Desc #001: RUH Attributes: Unused 00:12:58.382 RUH Usage Desc #002: RUH Attributes: Unused 00:12:58.382 RUH Usage Desc #003: RUH Attributes: Unused 00:12:58.382 RUH Usage Desc #004: RUH Attributes: Unused 00:12:58.382 RUH Usage Desc #005: RUH Attributes: Unused 00:12:58.382 RUH Usage Desc #006: RUH Attributes: Unused 00:12:58.382 RUH Usage Desc #007: RUH Attributes: Unused 00:12:58.382 00:12:58.382 FDP statistics log page 00:12:58.382 ======================= 00:12:58.382 Host bytes with metadata written: 406298624 00:12:58.382 Media bytes with metadata written: 406343680 00:12:58.382 Media bytes erased: 0 00:12:58.382 00:12:58.382 FDP events log page 00:12:58.382 =================== 00:12:58.382 Number of FDP events: 0 00:12:58.382 00:12:58.382 NVM Specific Namespace Data 00:12:58.382 =========================== 00:12:58.382 Logical Block Storage Tag Mask: 0 00:12:58.382 Protection Information Capabilities: 00:12:58.382 16b Guard Protection Information Storage Tag Support: No 00:12:58.382 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:58.382 Storage Tag Check Read Support: No 00:12:58.382 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.382 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.382 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.382 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.382 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.382 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.382 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information 
Format: 16b Guard PI 00:12:58.382 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:58.382 00:12:58.382 real 0m1.594s 00:12:58.382 user 0m0.646s 00:12:58.382 sys 0m0.734s 00:12:58.382 13:54:22 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:58.382 13:54:22 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:12:58.382 ************************************ 00:12:58.382 END TEST nvme_identify 00:12:58.382 ************************************ 00:12:58.382 13:54:22 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:58.382 13:54:22 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:12:58.382 13:54:22 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:58.382 13:54:22 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.382 13:54:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.382 ************************************ 00:12:58.382 START TEST nvme_perf 00:12:58.382 ************************************ 00:12:58.382 13:54:22 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:12:58.382 13:54:22 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:12:59.843 Initializing NVMe Controllers 00:12:59.843 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:59.843 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:59.843 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:59.843 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:59.843 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:59.843 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:59.843 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:59.843 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:59.843 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:59.843 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:59.843 Initialization complete. Launching workers. 
00:12:59.843 ======================================================== 00:12:59.843 Latency(us) 00:12:59.843 Device Information : IOPS MiB/s Average min max 00:12:59.843 PCIE (0000:00:10.0) NSID 1 from core 0: 9842.44 115.34 13071.40 8050.16 36062.12 00:12:59.843 PCIE (0000:00:11.0) NSID 1 from core 0: 9842.44 115.34 13061.59 8365.95 34117.78 00:12:59.843 PCIE (0000:00:13.0) NSID 1 from core 0: 9842.44 115.34 13048.56 8323.34 32796.11 00:12:59.843 PCIE (0000:00:12.0) NSID 1 from core 0: 9842.44 115.34 13034.77 8235.43 30798.65 00:12:59.843 PCIE (0000:00:12.0) NSID 2 from core 0: 9842.44 115.34 13021.38 8218.12 28773.79 00:12:59.843 PCIE (0000:00:12.0) NSID 3 from core 0: 9842.44 115.34 13007.60 8211.77 27501.00 00:12:59.843 ======================================================== 00:12:59.843 Total : 59054.62 692.05 13040.88 8050.16 36062.12 00:12:59.843 00:12:59.843 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:59.843 ================================================================================= 00:12:59.843 1.00000% : 8698.415us 00:12:59.843 10.00000% : 9472.931us 00:12:59.843 25.00000% : 10128.291us 00:12:59.843 50.00000% : 11439.011us 00:12:59.843 75.00000% : 12749.731us 00:12:59.843 90.00000% : 21567.302us 00:12:59.843 95.00000% : 23116.335us 00:12:59.843 98.00000% : 24903.680us 00:12:59.843 99.00000% : 26571.869us 00:12:59.843 99.50000% : 34555.345us 00:12:59.843 99.90000% : 35746.909us 00:12:59.843 99.99000% : 36223.535us 00:12:59.843 99.99900% : 36223.535us 00:12:59.843 99.99990% : 36223.535us 00:12:59.843 99.99999% : 36223.535us 00:12:59.843 00:12:59.843 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:59.843 ================================================================================= 00:12:59.843 1.00000% : 8757.993us 00:12:59.843 10.00000% : 9472.931us 00:12:59.843 25.00000% : 10068.713us 00:12:59.843 50.00000% : 11439.011us 00:12:59.843 75.00000% : 12630.575us 00:12:59.843 90.00000% : 21686.458us 00:12:59.843 95.00000% : 22997.178us 00:12:59.844 98.00000% : 24427.055us 00:12:59.844 99.00000% : 25618.618us 00:12:59.844 99.50000% : 32648.844us 00:12:59.844 99.90000% : 33840.407us 00:12:59.844 99.99000% : 34317.033us 00:12:59.844 99.99900% : 34317.033us 00:12:59.844 99.99990% : 34317.033us 00:12:59.844 99.99999% : 34317.033us 00:12:59.844 00:12:59.844 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:59.844 ================================================================================= 00:12:59.844 1.00000% : 8757.993us 00:12:59.844 10.00000% : 9472.931us 00:12:59.844 25.00000% : 10128.291us 00:12:59.844 50.00000% : 11498.589us 00:12:59.844 75.00000% : 12690.153us 00:12:59.844 90.00000% : 21686.458us 00:12:59.844 95.00000% : 22997.178us 00:12:59.844 98.00000% : 24188.742us 00:12:59.844 99.00000% : 25022.836us 00:12:59.844 99.50000% : 31218.967us 00:12:59.844 99.90000% : 32648.844us 00:12:59.844 99.99000% : 32887.156us 00:12:59.844 99.99900% : 32887.156us 00:12:59.844 99.99990% : 32887.156us 00:12:59.844 99.99999% : 32887.156us 00:12:59.844 00:12:59.844 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:59.844 ================================================================================= 00:12:59.844 1.00000% : 8757.993us 00:12:59.844 10.00000% : 9472.931us 00:12:59.844 25.00000% : 10068.713us 00:12:59.844 50.00000% : 11498.589us 00:12:59.844 75.00000% : 12690.153us 00:12:59.844 90.00000% : 21686.458us 00:12:59.844 95.00000% : 22878.022us 00:12:59.844 98.00000% : 23831.273us 
00:12:59.844 99.00000% : 24546.211us 00:12:59.844 99.50000% : 29193.309us 00:12:59.844 99.90000% : 30504.029us 00:12:59.844 99.99000% : 30980.655us 00:12:59.844 99.99900% : 30980.655us 00:12:59.844 99.99990% : 30980.655us 00:12:59.844 99.99999% : 30980.655us 00:12:59.844 00:12:59.844 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:59.844 ================================================================================= 00:12:59.844 1.00000% : 8757.993us 00:12:59.844 10.00000% : 9472.931us 00:12:59.844 25.00000% : 10068.713us 00:12:59.844 50.00000% : 11498.589us 00:12:59.844 75.00000% : 12690.153us 00:12:59.844 90.00000% : 21686.458us 00:12:59.844 95.00000% : 22758.865us 00:12:59.844 98.00000% : 23831.273us 00:12:59.844 99.00000% : 24546.211us 00:12:59.844 99.50000% : 27286.807us 00:12:59.844 99.90000% : 28597.527us 00:12:59.844 99.99000% : 28835.840us 00:12:59.844 99.99900% : 28835.840us 00:12:59.844 99.99990% : 28835.840us 00:12:59.844 99.99999% : 28835.840us 00:12:59.844 00:12:59.844 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:59.844 ================================================================================= 00:12:59.844 1.00000% : 8757.993us 00:12:59.844 10.00000% : 9472.931us 00:12:59.844 25.00000% : 10068.713us 00:12:59.844 50.00000% : 11498.589us 00:12:59.844 75.00000% : 12690.153us 00:12:59.844 90.00000% : 21567.302us 00:12:59.844 95.00000% : 22758.865us 00:12:59.844 98.00000% : 23831.273us 00:12:59.844 99.00000% : 25022.836us 00:12:59.844 99.50000% : 25976.087us 00:12:59.844 99.90000% : 27286.807us 00:12:59.844 99.99000% : 27525.120us 00:12:59.844 99.99900% : 27525.120us 00:12:59.844 99.99990% : 27525.120us 00:12:59.844 99.99999% : 27525.120us 00:12:59.844 00:12:59.844 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:59.844 ============================================================================== 00:12:59.844 Range in us Cumulative IO count 00:12:59.844 8043.055 - 8102.633: 0.0304% ( 3) 00:12:59.844 8102.633 - 8162.211: 0.0609% ( 3) 00:12:59.844 8162.211 - 8221.789: 0.1015% ( 4) 00:12:59.844 8221.789 - 8281.367: 0.1725% ( 7) 00:12:59.844 8281.367 - 8340.945: 0.2435% ( 7) 00:12:59.844 8340.945 - 8400.524: 0.3653% ( 12) 00:12:59.844 8400.524 - 8460.102: 0.4972% ( 13) 00:12:59.844 8460.102 - 8519.680: 0.6392% ( 14) 00:12:59.844 8519.680 - 8579.258: 0.7914% ( 15) 00:12:59.844 8579.258 - 8638.836: 0.9740% ( 18) 00:12:59.844 8638.836 - 8698.415: 1.2378% ( 26) 00:12:59.844 8698.415 - 8757.993: 1.6234% ( 38) 00:12:59.844 8757.993 - 8817.571: 2.0495% ( 42) 00:12:59.844 8817.571 - 8877.149: 2.5061% ( 45) 00:12:59.844 8877.149 - 8936.727: 3.0540% ( 54) 00:12:59.844 8936.727 - 8996.305: 3.7338% ( 67) 00:12:59.844 8996.305 - 9055.884: 4.4947% ( 75) 00:12:59.844 9055.884 - 9115.462: 5.3064% ( 80) 00:12:59.844 9115.462 - 9175.040: 6.0775% ( 76) 00:12:59.844 9175.040 - 9234.618: 6.8994% ( 81) 00:12:59.844 9234.618 - 9294.196: 7.7516% ( 84) 00:12:59.844 9294.196 - 9353.775: 8.6952% ( 93) 00:12:59.844 9353.775 - 9413.353: 9.7403% ( 103) 00:12:59.844 9413.353 - 9472.931: 11.0288% ( 127) 00:12:59.844 9472.931 - 9532.509: 12.2463% ( 120) 00:12:59.844 9532.509 - 9592.087: 13.5653% ( 130) 00:12:59.844 9592.087 - 9651.665: 14.8843% ( 130) 00:12:59.844 9651.665 - 9711.244: 16.3149% ( 141) 00:12:59.844 9711.244 - 9770.822: 17.6238% ( 129) 00:12:59.844 9770.822 - 9830.400: 18.9022% ( 126) 00:12:59.844 9830.400 - 9889.978: 20.3734% ( 145) 00:12:59.844 9889.978 - 9949.556: 21.7532% ( 136) 00:12:59.844 9949.556 - 10009.135: 23.2650% 
( 149) 00:12:59.844 10009.135 - 10068.713: 24.6652% ( 138) 00:12:59.844 10068.713 - 10128.291: 26.0755% ( 139) 00:12:59.844 10128.291 - 10187.869: 27.3539% ( 126) 00:12:59.844 10187.869 - 10247.447: 28.7135% ( 134) 00:12:59.844 10247.447 - 10307.025: 29.9614% ( 123) 00:12:59.844 10307.025 - 10366.604: 31.1485% ( 117) 00:12:59.844 10366.604 - 10426.182: 32.2545% ( 109) 00:12:59.844 10426.182 - 10485.760: 33.4821% ( 121) 00:12:59.844 10485.760 - 10545.338: 34.4257% ( 93) 00:12:59.844 10545.338 - 10604.916: 35.3287% ( 89) 00:12:59.844 10604.916 - 10664.495: 36.2723% ( 93) 00:12:59.844 10664.495 - 10724.073: 37.3275% ( 104) 00:12:59.844 10724.073 - 10783.651: 38.1696% ( 83) 00:12:59.844 10783.651 - 10843.229: 39.2045% ( 102) 00:12:59.844 10843.229 - 10902.807: 40.1887% ( 97) 00:12:59.844 10902.807 - 10962.385: 41.1627% ( 96) 00:12:59.844 10962.385 - 11021.964: 42.2991% ( 112) 00:12:59.844 11021.964 - 11081.542: 43.4355% ( 112) 00:12:59.844 11081.542 - 11141.120: 44.7139% ( 126) 00:12:59.844 11141.120 - 11200.698: 45.8705% ( 114) 00:12:59.844 11200.698 - 11260.276: 47.1591% ( 127) 00:12:59.844 11260.276 - 11319.855: 48.3665% ( 119) 00:12:59.844 11319.855 - 11379.433: 49.5130% ( 113) 00:12:59.844 11379.433 - 11439.011: 50.6392% ( 111) 00:12:59.844 11439.011 - 11498.589: 51.8060% ( 115) 00:12:59.844 11498.589 - 11558.167: 52.9119% ( 109) 00:12:59.844 11558.167 - 11617.745: 54.2005% ( 127) 00:12:59.844 11617.745 - 11677.324: 55.4992% ( 128) 00:12:59.844 11677.324 - 11736.902: 56.9298% ( 141) 00:12:59.844 11736.902 - 11796.480: 58.2894% ( 134) 00:12:59.844 11796.480 - 11856.058: 59.7200% ( 141) 00:12:59.844 11856.058 - 11915.636: 61.2013% ( 146) 00:12:59.844 11915.636 - 11975.215: 62.4696% ( 125) 00:12:59.844 11975.215 - 12034.793: 63.8190% ( 133) 00:12:59.844 12034.793 - 12094.371: 65.0670% ( 123) 00:12:59.844 12094.371 - 12153.949: 66.2744% ( 119) 00:12:59.844 12153.949 - 12213.527: 67.4107% ( 112) 00:12:59.844 12213.527 - 12273.105: 68.4050% ( 98) 00:12:59.844 12273.105 - 12332.684: 69.4399% ( 102) 00:12:59.844 12332.684 - 12392.262: 70.4647% ( 101) 00:12:59.844 12392.262 - 12451.840: 71.4286% ( 95) 00:12:59.844 12451.840 - 12511.418: 72.2606% ( 82) 00:12:59.844 12511.418 - 12570.996: 73.1433% ( 87) 00:12:59.844 12570.996 - 12630.575: 73.9550% ( 80) 00:12:59.844 12630.575 - 12690.153: 74.6956% ( 73) 00:12:59.844 12690.153 - 12749.731: 75.3856% ( 68) 00:12:59.844 12749.731 - 12809.309: 76.0248% ( 63) 00:12:59.844 12809.309 - 12868.887: 76.5219% ( 49) 00:12:59.844 12868.887 - 12928.465: 76.9075% ( 38) 00:12:59.844 12928.465 - 12988.044: 77.1307% ( 22) 00:12:59.844 12988.044 - 13047.622: 77.3843% ( 25) 00:12:59.844 13047.622 - 13107.200: 77.5974% ( 21) 00:12:59.844 13107.200 - 13166.778: 77.7800% ( 18) 00:12:59.844 13166.778 - 13226.356: 77.9830% ( 20) 00:12:59.844 13226.356 - 13285.935: 78.1554% ( 17) 00:12:59.844 13285.935 - 13345.513: 78.3584% ( 20) 00:12:59.844 13345.513 - 13405.091: 78.5511% ( 19) 00:12:59.844 13405.091 - 13464.669: 78.6729% ( 12) 00:12:59.844 13464.669 - 13524.247: 78.8352% ( 16) 00:12:59.844 13524.247 - 13583.825: 78.9976% ( 16) 00:12:59.844 13583.825 - 13643.404: 79.1193% ( 12) 00:12:59.844 13643.404 - 13702.982: 79.2817% ( 16) 00:12:59.844 13702.982 - 13762.560: 79.4034% ( 12) 00:12:59.844 13762.560 - 13822.138: 79.5455% ( 14) 00:12:59.844 13822.138 - 13881.716: 79.6571% ( 11) 00:12:59.844 13881.716 - 13941.295: 79.7382% ( 8) 00:12:59.844 13941.295 - 14000.873: 79.7788% ( 4) 00:12:59.844 14000.873 - 14060.451: 79.8295% ( 5) 00:12:59.844 14060.451 - 14120.029: 79.8600% ( 3) 
00:12:59.844 14120.029 - 14179.607: 79.9209% ( 6) 00:12:59.844 14179.607 - 14239.185: 79.9513% ( 3) 00:12:59.844 14239.185 - 14298.764: 79.9919% ( 4) 00:12:59.844 14298.764 - 14358.342: 80.0325% ( 4) 00:12:59.844 14358.342 - 14417.920: 80.0426% ( 1) 00:12:59.844 14417.920 - 14477.498: 80.0629% ( 2) 00:12:59.844 14477.498 - 14537.076: 80.0832% ( 2) 00:12:59.844 14537.076 - 14596.655: 80.1035% ( 2) 00:12:59.844 14596.655 - 14656.233: 80.1441% ( 4) 00:12:59.844 14656.233 - 14715.811: 80.1745% ( 3) 00:12:59.844 14715.811 - 14775.389: 80.2050% ( 3) 00:12:59.844 14775.389 - 14834.967: 80.2354% ( 3) 00:12:59.844 14834.967 - 14894.545: 80.2658% ( 3) 00:12:59.844 14894.545 - 14954.124: 80.2861% ( 2) 00:12:59.844 14954.124 - 15013.702: 80.3267% ( 4) 00:12:59.844 15013.702 - 15073.280: 80.3571% ( 3) 00:12:59.844 15073.280 - 15132.858: 80.3673% ( 1) 00:12:59.844 15132.858 - 15192.436: 80.4079% ( 4) 00:12:59.844 15192.436 - 15252.015: 80.4485% ( 4) 00:12:59.844 15252.015 - 15371.171: 80.5296% ( 8) 00:12:59.844 15371.171 - 15490.327: 80.5905% ( 6) 00:12:59.844 15490.327 - 15609.484: 80.6818% ( 9) 00:12:59.844 15609.484 - 15728.640: 80.7731% ( 9) 00:12:59.844 15728.640 - 15847.796: 80.8442% ( 7) 00:12:59.844 15847.796 - 15966.953: 80.8949% ( 5) 00:12:59.844 15966.953 - 16086.109: 80.9456% ( 5) 00:12:59.844 16086.109 - 16205.265: 81.0065% ( 6) 00:12:59.844 16205.265 - 16324.422: 81.0978% ( 9) 00:12:59.844 16324.422 - 16443.578: 81.1587% ( 6) 00:12:59.844 16443.578 - 16562.735: 81.1688% ( 1) 00:12:59.844 16562.735 - 16681.891: 81.1891% ( 2) 00:12:59.844 16681.891 - 16801.047: 81.2196% ( 3) 00:12:59.844 16801.047 - 16920.204: 81.2399% ( 2) 00:12:59.844 16920.204 - 17039.360: 81.2703% ( 3) 00:12:59.844 17039.360 - 17158.516: 81.3007% ( 3) 00:12:59.844 17158.516 - 17277.673: 81.3616% ( 6) 00:12:59.844 17277.673 - 17396.829: 81.4022% ( 4) 00:12:59.844 17396.829 - 17515.985: 81.4631% ( 6) 00:12:59.844 17515.985 - 17635.142: 81.5442% ( 8) 00:12:59.844 17635.142 - 17754.298: 81.6254% ( 8) 00:12:59.844 17754.298 - 17873.455: 81.6964% ( 7) 00:12:59.844 17873.455 - 17992.611: 81.7776% ( 8) 00:12:59.844 17992.611 - 18111.767: 81.8385% ( 6) 00:12:59.844 18111.767 - 18230.924: 81.9298% ( 9) 00:12:59.844 18230.924 - 18350.080: 82.0110% ( 8) 00:12:59.844 18350.080 - 18469.236: 82.0820% ( 7) 00:12:59.844 18469.236 - 18588.393: 82.1936% ( 11) 00:12:59.844 18588.393 - 18707.549: 82.2849% ( 9) 00:12:59.844 18707.549 - 18826.705: 82.3864% ( 10) 00:12:59.844 18826.705 - 18945.862: 82.5183% ( 13) 00:12:59.844 18945.862 - 19065.018: 82.6907% ( 17) 00:12:59.844 19065.018 - 19184.175: 82.8835% ( 19) 00:12:59.844 19184.175 - 19303.331: 83.1169% ( 23) 00:12:59.844 19303.331 - 19422.487: 83.2995% ( 18) 00:12:59.844 19422.487 - 19541.644: 83.5532% ( 25) 00:12:59.844 19541.644 - 19660.800: 83.8677% ( 31) 00:12:59.844 19660.800 - 19779.956: 84.1721% ( 30) 00:12:59.844 19779.956 - 19899.113: 84.4257% ( 25) 00:12:59.844 19899.113 - 20018.269: 84.7403% ( 31) 00:12:59.844 20018.269 - 20137.425: 85.0751% ( 33) 00:12:59.844 20137.425 - 20256.582: 85.4302% ( 35) 00:12:59.844 20256.582 - 20375.738: 85.8462% ( 41) 00:12:59.844 20375.738 - 20494.895: 86.2317% ( 38) 00:12:59.844 20494.895 - 20614.051: 86.7188% ( 48) 00:12:59.844 20614.051 - 20733.207: 87.0840% ( 36) 00:12:59.844 20733.207 - 20852.364: 87.5101% ( 42) 00:12:59.844 20852.364 - 20971.520: 87.9363% ( 42) 00:12:59.844 20971.520 - 21090.676: 88.3117% ( 37) 00:12:59.844 21090.676 - 21209.833: 88.7784% ( 46) 00:12:59.844 21209.833 - 21328.989: 89.2147% ( 43) 00:12:59.844 21328.989 - 
21448.145: 89.6611% ( 44) 00:12:59.844 21448.145 - 21567.302: 90.0873% ( 42) 00:12:59.844 21567.302 - 21686.458: 90.4830% ( 39) 00:12:59.844 21686.458 - 21805.615: 90.8888% ( 40) 00:12:59.844 21805.615 - 21924.771: 91.2744% ( 38) 00:12:59.844 21924.771 - 22043.927: 91.6599% ( 38) 00:12:59.844 22043.927 - 22163.084: 92.1063% ( 44) 00:12:59.844 22163.084 - 22282.240: 92.5223% ( 41) 00:12:59.844 22282.240 - 22401.396: 92.8470% ( 32) 00:12:59.844 22401.396 - 22520.553: 93.3239% ( 47) 00:12:59.844 22520.553 - 22639.709: 93.6891% ( 36) 00:12:59.844 22639.709 - 22758.865: 94.1153% ( 42) 00:12:59.844 22758.865 - 22878.022: 94.4704% ( 35) 00:12:59.844 22878.022 - 22997.178: 94.8356% ( 36) 00:12:59.844 22997.178 - 23116.335: 95.1197% ( 28) 00:12:59.844 23116.335 - 23235.491: 95.4241% ( 30) 00:12:59.844 23235.491 - 23354.647: 95.6981% ( 27) 00:12:59.844 23354.647 - 23473.804: 96.0227% ( 32) 00:12:59.844 23473.804 - 23592.960: 96.2459% ( 22) 00:12:59.844 23592.960 - 23712.116: 96.4590% ( 21) 00:12:59.844 23712.116 - 23831.273: 96.7532% ( 29) 00:12:59.844 23831.273 - 23950.429: 96.9663% ( 21) 00:12:59.844 23950.429 - 24069.585: 97.2098% ( 24) 00:12:59.844 24069.585 - 24188.742: 97.4026% ( 19) 00:12:59.844 24188.742 - 24307.898: 97.5548% ( 15) 00:12:59.844 24307.898 - 24427.055: 97.6765% ( 12) 00:12:59.844 24427.055 - 24546.211: 97.8186% ( 14) 00:12:59.844 24546.211 - 24665.367: 97.9200% ( 10) 00:12:59.844 24665.367 - 24784.524: 97.9809% ( 6) 00:12:59.844 24784.524 - 24903.680: 98.0418% ( 6) 00:12:59.844 24903.680 - 25022.836: 98.1128% ( 7) 00:12:59.844 25022.836 - 25141.993: 98.1737% ( 6) 00:12:59.844 25141.993 - 25261.149: 98.2346% ( 6) 00:12:59.844 25261.149 - 25380.305: 98.3056% ( 7) 00:12:59.844 25380.305 - 25499.462: 98.3665% ( 6) 00:12:59.844 25499.462 - 25618.618: 98.4476% ( 8) 00:12:59.844 25618.618 - 25737.775: 98.5288% ( 8) 00:12:59.844 25737.775 - 25856.931: 98.6201% ( 9) 00:12:59.845 25856.931 - 25976.087: 98.7114% ( 9) 00:12:59.845 25976.087 - 26095.244: 98.7926% ( 8) 00:12:59.845 26095.244 - 26214.400: 98.8738% ( 8) 00:12:59.845 26214.400 - 26333.556: 98.9651% ( 9) 00:12:59.845 26333.556 - 26452.713: 98.9955% ( 3) 00:12:59.845 26452.713 - 26571.869: 99.0361% ( 4) 00:12:59.845 26571.869 - 26691.025: 99.0767% ( 4) 00:12:59.845 26691.025 - 26810.182: 99.1173% ( 4) 00:12:59.845 26810.182 - 26929.338: 99.1579% ( 4) 00:12:59.845 26929.338 - 27048.495: 99.1883% ( 3) 00:12:59.845 27048.495 - 27167.651: 99.2188% ( 3) 00:12:59.845 27167.651 - 27286.807: 99.2593% ( 4) 00:12:59.845 27286.807 - 27405.964: 99.3101% ( 5) 00:12:59.845 27405.964 - 27525.120: 99.3405% ( 3) 00:12:59.845 27525.120 - 27644.276: 99.3506% ( 1) 00:12:59.845 33840.407 - 34078.720: 99.4115% ( 6) 00:12:59.845 34078.720 - 34317.033: 99.4927% ( 8) 00:12:59.845 34317.033 - 34555.345: 99.5637% ( 7) 00:12:59.845 34555.345 - 34793.658: 99.6347% ( 7) 00:12:59.845 34793.658 - 35031.971: 99.6855% ( 5) 00:12:59.845 35031.971 - 35270.284: 99.7565% ( 7) 00:12:59.845 35270.284 - 35508.596: 99.8377% ( 8) 00:12:59.845 35508.596 - 35746.909: 99.9087% ( 7) 00:12:59.845 35746.909 - 35985.222: 99.9797% ( 7) 00:12:59.845 35985.222 - 36223.535: 100.0000% ( 2) 00:12:59.845 00:12:59.845 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:59.845 ============================================================================== 00:12:59.845 Range in us Cumulative IO count 00:12:59.845 8340.945 - 8400.524: 0.0507% ( 5) 00:12:59.845 8400.524 - 8460.102: 0.1319% ( 8) 00:12:59.845 8460.102 - 8519.680: 0.2638% ( 13) 00:12:59.845 8519.680 - 8579.258: 
0.4058% ( 14) 00:12:59.845 8579.258 - 8638.836: 0.5377% ( 13) 00:12:59.845 8638.836 - 8698.415: 0.7508% ( 21) 00:12:59.845 8698.415 - 8757.993: 1.0248% ( 27) 00:12:59.845 8757.993 - 8817.571: 1.4002% ( 37) 00:12:59.845 8817.571 - 8877.149: 1.8060% ( 40) 00:12:59.845 8877.149 - 8936.727: 2.3438% ( 53) 00:12:59.845 8936.727 - 8996.305: 2.8916% ( 54) 00:12:59.845 8996.305 - 9055.884: 3.4903% ( 59) 00:12:59.845 9055.884 - 9115.462: 4.2106% ( 71) 00:12:59.845 9115.462 - 9175.040: 5.0933% ( 87) 00:12:59.845 9175.040 - 9234.618: 6.0775% ( 97) 00:12:59.845 9234.618 - 9294.196: 7.0820% ( 99) 00:12:59.845 9294.196 - 9353.775: 8.1575% ( 106) 00:12:59.845 9353.775 - 9413.353: 9.3344% ( 116) 00:12:59.845 9413.353 - 9472.931: 10.6128% ( 126) 00:12:59.845 9472.931 - 9532.509: 11.9521% ( 132) 00:12:59.845 9532.509 - 9592.087: 13.3827% ( 141) 00:12:59.845 9592.087 - 9651.665: 14.8235% ( 142) 00:12:59.845 9651.665 - 9711.244: 16.3352% ( 149) 00:12:59.845 9711.244 - 9770.822: 17.8267% ( 147) 00:12:59.845 9770.822 - 9830.400: 19.3283% ( 148) 00:12:59.845 9830.400 - 9889.978: 20.8604% ( 151) 00:12:59.845 9889.978 - 9949.556: 22.3722% ( 149) 00:12:59.845 9949.556 - 10009.135: 23.8129% ( 142) 00:12:59.845 10009.135 - 10068.713: 25.1623% ( 133) 00:12:59.845 10068.713 - 10128.291: 26.4813% ( 130) 00:12:59.845 10128.291 - 10187.869: 27.8409% ( 134) 00:12:59.845 10187.869 - 10247.447: 29.1700% ( 131) 00:12:59.845 10247.447 - 10307.025: 30.3876% ( 120) 00:12:59.845 10307.025 - 10366.604: 31.5645% ( 116) 00:12:59.845 10366.604 - 10426.182: 32.5284% ( 95) 00:12:59.845 10426.182 - 10485.760: 33.4619% ( 92) 00:12:59.845 10485.760 - 10545.338: 34.4054% ( 93) 00:12:59.845 10545.338 - 10604.916: 35.1765% ( 76) 00:12:59.845 10604.916 - 10664.495: 35.8563% ( 67) 00:12:59.845 10664.495 - 10724.073: 36.6579% ( 79) 00:12:59.845 10724.073 - 10783.651: 37.6826% ( 101) 00:12:59.845 10783.651 - 10843.229: 38.7277% ( 103) 00:12:59.845 10843.229 - 10902.807: 39.8843% ( 114) 00:12:59.845 10902.807 - 10962.385: 41.0004% ( 110) 00:12:59.845 10962.385 - 11021.964: 42.0556% ( 104) 00:12:59.845 11021.964 - 11081.542: 43.0804% ( 101) 00:12:59.845 11081.542 - 11141.120: 44.2370% ( 114) 00:12:59.845 11141.120 - 11200.698: 45.4241% ( 117) 00:12:59.845 11200.698 - 11260.276: 46.6518% ( 121) 00:12:59.845 11260.276 - 11319.855: 47.9505% ( 128) 00:12:59.845 11319.855 - 11379.433: 49.2188% ( 125) 00:12:59.845 11379.433 - 11439.011: 50.5073% ( 127) 00:12:59.845 11439.011 - 11498.589: 51.6944% ( 117) 00:12:59.845 11498.589 - 11558.167: 53.0032% ( 129) 00:12:59.845 11558.167 - 11617.745: 54.2106% ( 119) 00:12:59.845 11617.745 - 11677.324: 55.5499% ( 132) 00:12:59.845 11677.324 - 11736.902: 57.0515% ( 148) 00:12:59.845 11736.902 - 11796.480: 58.5329% ( 146) 00:12:59.845 11796.480 - 11856.058: 60.1157% ( 156) 00:12:59.845 11856.058 - 11915.636: 61.6985% ( 156) 00:12:59.845 11915.636 - 11975.215: 63.1088% ( 139) 00:12:59.845 11975.215 - 12034.793: 64.4379% ( 131) 00:12:59.845 12034.793 - 12094.371: 65.7772% ( 132) 00:12:59.845 12094.371 - 12153.949: 67.1469% ( 135) 00:12:59.845 12153.949 - 12213.527: 68.4050% ( 124) 00:12:59.845 12213.527 - 12273.105: 69.6631% ( 124) 00:12:59.845 12273.105 - 12332.684: 70.7792% ( 110) 00:12:59.845 12332.684 - 12392.262: 71.8547% ( 106) 00:12:59.845 12392.262 - 12451.840: 72.7983% ( 93) 00:12:59.845 12451.840 - 12511.418: 73.7114% ( 90) 00:12:59.845 12511.418 - 12570.996: 74.5637% ( 84) 00:12:59.845 12570.996 - 12630.575: 75.4566% ( 88) 00:12:59.845 12630.575 - 12690.153: 76.1465% ( 68) 00:12:59.845 12690.153 - 12749.731: 
76.7553% ( 60) 00:12:59.845 12749.731 - 12809.309: 77.1510% ( 39) 00:12:59.845 12809.309 - 12868.887: 77.4046% ( 25) 00:12:59.845 12868.887 - 12928.465: 77.5771% ( 17) 00:12:59.845 12928.465 - 12988.044: 77.7394% ( 16) 00:12:59.845 12988.044 - 13047.622: 77.9322% ( 19) 00:12:59.845 13047.622 - 13107.200: 78.1250% ( 19) 00:12:59.845 13107.200 - 13166.778: 78.2873% ( 16) 00:12:59.845 13166.778 - 13226.356: 78.4598% ( 17) 00:12:59.845 13226.356 - 13285.935: 78.6120% ( 15) 00:12:59.845 13285.935 - 13345.513: 78.7642% ( 15) 00:12:59.845 13345.513 - 13405.091: 78.9367% ( 17) 00:12:59.845 13405.091 - 13464.669: 79.1092% ( 17) 00:12:59.845 13464.669 - 13524.247: 79.2715% ( 16) 00:12:59.845 13524.247 - 13583.825: 79.4034% ( 13) 00:12:59.845 13583.825 - 13643.404: 79.4846% ( 8) 00:12:59.845 13643.404 - 13702.982: 79.5759% ( 9) 00:12:59.845 13702.982 - 13762.560: 79.6469% ( 7) 00:12:59.845 13762.560 - 13822.138: 79.7078% ( 6) 00:12:59.845 13822.138 - 13881.716: 79.7382% ( 3) 00:12:59.845 13881.716 - 13941.295: 79.7788% ( 4) 00:12:59.845 13941.295 - 14000.873: 79.8093% ( 3) 00:12:59.845 14000.873 - 14060.451: 79.8397% ( 3) 00:12:59.845 14060.451 - 14120.029: 79.8701% ( 3) 00:12:59.845 15013.702 - 15073.280: 79.9006% ( 3) 00:12:59.845 15073.280 - 15132.858: 79.9310% ( 3) 00:12:59.845 15132.858 - 15192.436: 79.9412% ( 1) 00:12:59.845 15192.436 - 15252.015: 79.9716% ( 3) 00:12:59.845 15252.015 - 15371.171: 80.0629% ( 9) 00:12:59.845 15371.171 - 15490.327: 80.1542% ( 9) 00:12:59.845 15490.327 - 15609.484: 80.2455% ( 9) 00:12:59.845 15609.484 - 15728.640: 80.3369% ( 9) 00:12:59.845 15728.640 - 15847.796: 80.4180% ( 8) 00:12:59.845 15847.796 - 15966.953: 80.4890% ( 7) 00:12:59.845 15966.953 - 16086.109: 80.5499% ( 6) 00:12:59.845 16086.109 - 16205.265: 80.6412% ( 9) 00:12:59.845 16205.265 - 16324.422: 80.7224% ( 8) 00:12:59.845 16324.422 - 16443.578: 80.8239% ( 10) 00:12:59.845 16443.578 - 16562.735: 80.8847% ( 6) 00:12:59.845 16562.735 - 16681.891: 80.9253% ( 4) 00:12:59.845 16681.891 - 16801.047: 80.9558% ( 3) 00:12:59.845 16801.047 - 16920.204: 80.9862% ( 3) 00:12:59.845 16920.204 - 17039.360: 81.0166% ( 3) 00:12:59.845 17039.360 - 17158.516: 81.0471% ( 3) 00:12:59.845 17158.516 - 17277.673: 81.0877% ( 4) 00:12:59.845 17277.673 - 17396.829: 81.1181% ( 3) 00:12:59.845 17396.829 - 17515.985: 81.1485% ( 3) 00:12:59.845 17515.985 - 17635.142: 81.1688% ( 2) 00:12:59.845 17992.611 - 18111.767: 81.1993% ( 3) 00:12:59.845 18111.767 - 18230.924: 81.2804% ( 8) 00:12:59.845 18230.924 - 18350.080: 81.3819% ( 10) 00:12:59.845 18350.080 - 18469.236: 81.4732% ( 9) 00:12:59.845 18469.236 - 18588.393: 81.5747% ( 10) 00:12:59.845 18588.393 - 18707.549: 81.6863% ( 11) 00:12:59.845 18707.549 - 18826.705: 81.8080% ( 12) 00:12:59.845 18826.705 - 18945.862: 81.9501% ( 14) 00:12:59.845 18945.862 - 19065.018: 82.0921% ( 14) 00:12:59.845 19065.018 - 19184.175: 82.2443% ( 15) 00:12:59.845 19184.175 - 19303.331: 82.3965% ( 15) 00:12:59.845 19303.331 - 19422.487: 82.5791% ( 18) 00:12:59.845 19422.487 - 19541.644: 82.8937% ( 31) 00:12:59.845 19541.644 - 19660.800: 83.2488% ( 35) 00:12:59.845 19660.800 - 19779.956: 83.6039% ( 35) 00:12:59.845 19779.956 - 19899.113: 83.9793% ( 37) 00:12:59.845 19899.113 - 20018.269: 84.3344% ( 35) 00:12:59.845 20018.269 - 20137.425: 84.6692% ( 33) 00:12:59.845 20137.425 - 20256.582: 85.0345% ( 36) 00:12:59.845 20256.582 - 20375.738: 85.3896% ( 35) 00:12:59.845 20375.738 - 20494.895: 85.7549% ( 36) 00:12:59.845 20494.895 - 20614.051: 86.1303% ( 37) 00:12:59.845 20614.051 - 20733.207: 86.5361% ( 40) 
00:12:59.845 20733.207 - 20852.364: 86.9724% ( 43) 00:12:59.845 20852.364 - 20971.520: 87.3985% ( 42) 00:12:59.845 20971.520 - 21090.676: 87.8450% ( 44) 00:12:59.845 21090.676 - 21209.833: 88.3218% ( 47) 00:12:59.845 21209.833 - 21328.989: 88.8088% ( 48) 00:12:59.845 21328.989 - 21448.145: 89.3263% ( 51) 00:12:59.845 21448.145 - 21567.302: 89.8539% ( 52) 00:12:59.845 21567.302 - 21686.458: 90.3612% ( 50) 00:12:59.845 21686.458 - 21805.615: 90.8178% ( 45) 00:12:59.845 21805.615 - 21924.771: 91.3048% ( 48) 00:12:59.845 21924.771 - 22043.927: 91.8324% ( 52) 00:12:59.845 22043.927 - 22163.084: 92.3093% ( 47) 00:12:59.845 22163.084 - 22282.240: 92.8673% ( 55) 00:12:59.845 22282.240 - 22401.396: 93.3239% ( 45) 00:12:59.845 22401.396 - 22520.553: 93.7804% ( 45) 00:12:59.845 22520.553 - 22639.709: 94.1964% ( 41) 00:12:59.845 22639.709 - 22758.865: 94.6124% ( 41) 00:12:59.845 22758.865 - 22878.022: 94.9980% ( 38) 00:12:59.845 22878.022 - 22997.178: 95.3226% ( 32) 00:12:59.845 22997.178 - 23116.335: 95.6372% ( 31) 00:12:59.845 23116.335 - 23235.491: 95.9213% ( 28) 00:12:59.845 23235.491 - 23354.647: 96.2155% ( 29) 00:12:59.845 23354.647 - 23473.804: 96.4793% ( 26) 00:12:59.845 23473.804 - 23592.960: 96.7532% ( 27) 00:12:59.845 23592.960 - 23712.116: 97.0069% ( 25) 00:12:59.845 23712.116 - 23831.273: 97.2504% ( 24) 00:12:59.845 23831.273 - 23950.429: 97.4533% ( 20) 00:12:59.845 23950.429 - 24069.585: 97.6562% ( 20) 00:12:59.845 24069.585 - 24188.742: 97.8287% ( 17) 00:12:59.845 24188.742 - 24307.898: 97.9403% ( 11) 00:12:59.845 24307.898 - 24427.055: 98.0317% ( 9) 00:12:59.845 24427.055 - 24546.211: 98.1433% ( 11) 00:12:59.845 24546.211 - 24665.367: 98.2650% ( 12) 00:12:59.845 24665.367 - 24784.524: 98.4071% ( 14) 00:12:59.845 24784.524 - 24903.680: 98.5390% ( 13) 00:12:59.845 24903.680 - 25022.836: 98.6607% ( 12) 00:12:59.845 25022.836 - 25141.993: 98.7622% ( 10) 00:12:59.845 25141.993 - 25261.149: 98.8535% ( 9) 00:12:59.845 25261.149 - 25380.305: 98.9144% ( 6) 00:12:59.845 25380.305 - 25499.462: 98.9752% ( 6) 00:12:59.845 25499.462 - 25618.618: 99.0260% ( 5) 00:12:59.845 25618.618 - 25737.775: 99.0869% ( 6) 00:12:59.845 25737.775 - 25856.931: 99.1477% ( 6) 00:12:59.845 25856.931 - 25976.087: 99.1883% ( 4) 00:12:59.845 25976.087 - 26095.244: 99.2289% ( 4) 00:12:59.845 26095.244 - 26214.400: 99.2695% ( 4) 00:12:59.845 26214.400 - 26333.556: 99.3202% ( 5) 00:12:59.845 26333.556 - 26452.713: 99.3506% ( 3) 00:12:59.845 31933.905 - 32172.218: 99.3912% ( 4) 00:12:59.845 32172.218 - 32410.531: 99.4623% ( 7) 00:12:59.845 32410.531 - 32648.844: 99.5333% ( 7) 00:12:59.845 32648.844 - 32887.156: 99.6144% ( 8) 00:12:59.845 32887.156 - 33125.469: 99.6855% ( 7) 00:12:59.845 33125.469 - 33363.782: 99.7666% ( 8) 00:12:59.845 33363.782 - 33602.095: 99.8478% ( 8) 00:12:59.845 33602.095 - 33840.407: 99.9188% ( 7) 00:12:59.845 33840.407 - 34078.720: 99.9899% ( 7) 00:12:59.845 34078.720 - 34317.033: 100.0000% ( 1) 00:12:59.845 00:12:59.845 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:59.845 ============================================================================== 00:12:59.845 Range in us Cumulative IO count 00:12:59.845 8281.367 - 8340.945: 0.0101% ( 1) 00:12:59.845 8340.945 - 8400.524: 0.0507% ( 4) 00:12:59.845 8400.524 - 8460.102: 0.1522% ( 10) 00:12:59.845 8460.102 - 8519.680: 0.3044% ( 15) 00:12:59.845 8519.680 - 8579.258: 0.4261% ( 12) 00:12:59.845 8579.258 - 8638.836: 0.6088% ( 18) 00:12:59.845 8638.836 - 8698.415: 0.8117% ( 20) 00:12:59.845 8698.415 - 8757.993: 1.0958% ( 28) 
00:12:59.845 8757.993 - 8817.571: 1.5219% ( 42) 00:12:59.845 8817.571 - 8877.149: 1.9582% ( 43) 00:12:59.845 8877.149 - 8936.727: 2.4655% ( 50) 00:12:59.845 8936.727 - 8996.305: 2.9830% ( 51) 00:12:59.845 8996.305 - 9055.884: 3.7338% ( 74) 00:12:59.845 9055.884 - 9115.462: 4.6063% ( 86) 00:12:59.845 9115.462 - 9175.040: 5.4688% ( 85) 00:12:59.845 9175.040 - 9234.618: 6.4529% ( 97) 00:12:59.845 9234.618 - 9294.196: 7.4675% ( 100) 00:12:59.845 9294.196 - 9353.775: 8.5735% ( 109) 00:12:59.845 9353.775 - 9413.353: 9.7504% ( 116) 00:12:59.845 9413.353 - 9472.931: 11.0694% ( 130) 00:12:59.845 9472.931 - 9532.509: 12.3884% ( 130) 00:12:59.845 9532.509 - 9592.087: 13.7886% ( 138) 00:12:59.845 9592.087 - 9651.665: 15.2192% ( 141) 00:12:59.845 9651.665 - 9711.244: 16.6396% ( 140) 00:12:59.845 9711.244 - 9770.822: 18.1108% ( 145) 00:12:59.845 9770.822 - 9830.400: 19.5820% ( 145) 00:12:59.845 9830.400 - 9889.978: 21.0024% ( 140) 00:12:59.845 9889.978 - 9949.556: 22.4127% ( 139) 00:12:59.845 9949.556 - 10009.135: 23.6303% ( 120) 00:12:59.845 10009.135 - 10068.713: 24.8478% ( 120) 00:12:59.845 10068.713 - 10128.291: 26.0552% ( 119) 00:12:59.845 10128.291 - 10187.869: 27.2119% ( 114) 00:12:59.845 10187.869 - 10247.447: 28.3888% ( 116) 00:12:59.845 10247.447 - 10307.025: 29.5657% ( 116) 00:12:59.845 10307.025 - 10366.604: 30.7224% ( 114) 00:12:59.845 10366.604 - 10426.182: 31.7877% ( 105) 00:12:59.845 10426.182 - 10485.760: 32.7618% ( 96) 00:12:59.845 10485.760 - 10545.338: 33.7155% ( 94) 00:12:59.845 10545.338 - 10604.916: 34.6287% ( 90) 00:12:59.845 10604.916 - 10664.495: 35.5824% ( 94) 00:12:59.845 10664.495 - 10724.073: 36.5463% ( 95) 00:12:59.845 10724.073 - 10783.651: 37.5101% ( 95) 00:12:59.845 10783.651 - 10843.229: 38.4030% ( 88) 00:12:59.845 10843.229 - 10902.807: 39.3567% ( 94) 00:12:59.845 10902.807 - 10962.385: 40.3206% ( 95) 00:12:59.845 10962.385 - 11021.964: 41.4062% ( 107) 00:12:59.846 11021.964 - 11081.542: 42.5020% ( 108) 00:12:59.846 11081.542 - 11141.120: 43.6384% ( 112) 00:12:59.846 11141.120 - 11200.698: 44.8661% ( 121) 00:12:59.846 11200.698 - 11260.276: 46.0735% ( 119) 00:12:59.846 11260.276 - 11319.855: 47.3316% ( 124) 00:12:59.846 11319.855 - 11379.433: 48.5897% ( 124) 00:12:59.846 11379.433 - 11439.011: 49.8275% ( 122) 00:12:59.846 11439.011 - 11498.589: 51.1465% ( 130) 00:12:59.846 11498.589 - 11558.167: 52.4554% ( 129) 00:12:59.846 11558.167 - 11617.745: 53.6729% ( 120) 00:12:59.846 11617.745 - 11677.324: 55.0223% ( 133) 00:12:59.846 11677.324 - 11736.902: 56.3718% ( 133) 00:12:59.846 11736.902 - 11796.480: 57.8632% ( 147) 00:12:59.846 11796.480 - 11856.058: 59.3446% ( 146) 00:12:59.846 11856.058 - 11915.636: 60.8056% ( 144) 00:12:59.846 11915.636 - 11975.215: 62.2362% ( 141) 00:12:59.846 11975.215 - 12034.793: 63.6668% ( 141) 00:12:59.846 12034.793 - 12094.371: 64.9858% ( 130) 00:12:59.846 12094.371 - 12153.949: 66.2338% ( 123) 00:12:59.846 12153.949 - 12213.527: 67.5629% ( 131) 00:12:59.846 12213.527 - 12273.105: 68.7094% ( 113) 00:12:59.846 12273.105 - 12332.684: 69.8458% ( 112) 00:12:59.846 12332.684 - 12392.262: 70.9517% ( 109) 00:12:59.846 12392.262 - 12451.840: 71.9765% ( 101) 00:12:59.846 12451.840 - 12511.418: 72.9200% ( 93) 00:12:59.846 12511.418 - 12570.996: 73.8028% ( 87) 00:12:59.846 12570.996 - 12630.575: 74.5840% ( 77) 00:12:59.846 12630.575 - 12690.153: 75.3044% ( 71) 00:12:59.846 12690.153 - 12749.731: 75.9131% ( 60) 00:12:59.846 12749.731 - 12809.309: 76.4407% ( 52) 00:12:59.846 12809.309 - 12868.887: 76.8567% ( 41) 00:12:59.846 12868.887 - 12928.465: 
77.1713% ( 31) 00:12:59.846 12928.465 - 12988.044: 77.4452% ( 27) 00:12:59.846 12988.044 - 13047.622: 77.7293% ( 28) 00:12:59.846 13047.622 - 13107.200: 77.9830% ( 25) 00:12:59.846 13107.200 - 13166.778: 78.2366% ( 25) 00:12:59.846 13166.778 - 13226.356: 78.4395% ( 20) 00:12:59.846 13226.356 - 13285.935: 78.6627% ( 22) 00:12:59.846 13285.935 - 13345.513: 78.8860% ( 22) 00:12:59.846 13345.513 - 13405.091: 79.0787% ( 19) 00:12:59.846 13405.091 - 13464.669: 79.2614% ( 18) 00:12:59.846 13464.669 - 13524.247: 79.4237% ( 16) 00:12:59.846 13524.247 - 13583.825: 79.5252% ( 10) 00:12:59.846 13583.825 - 13643.404: 79.6266% ( 10) 00:12:59.846 13643.404 - 13702.982: 79.7281% ( 10) 00:12:59.846 13702.982 - 13762.560: 79.8397% ( 11) 00:12:59.846 13762.560 - 13822.138: 79.9310% ( 9) 00:12:59.846 13822.138 - 13881.716: 80.0020% ( 7) 00:12:59.846 13881.716 - 13941.295: 80.0325% ( 3) 00:12:59.846 13941.295 - 14000.873: 80.0832% ( 5) 00:12:59.846 14000.873 - 14060.451: 80.1339% ( 5) 00:12:59.846 14060.451 - 14120.029: 80.1847% ( 5) 00:12:59.846 14120.029 - 14179.607: 80.1948% ( 1) 00:12:59.846 14179.607 - 14239.185: 80.2151% ( 2) 00:12:59.846 14239.185 - 14298.764: 80.2252% ( 1) 00:12:59.846 14298.764 - 14358.342: 80.2455% ( 2) 00:12:59.846 14358.342 - 14417.920: 80.2557% ( 1) 00:12:59.846 14417.920 - 14477.498: 80.2760% ( 2) 00:12:59.846 14477.498 - 14537.076: 80.2861% ( 1) 00:12:59.846 14537.076 - 14596.655: 80.3064% ( 2) 00:12:59.846 14596.655 - 14656.233: 80.3166% ( 1) 00:12:59.846 14656.233 - 14715.811: 80.3369% ( 2) 00:12:59.846 14715.811 - 14775.389: 80.3571% ( 2) 00:12:59.846 14775.389 - 14834.967: 80.3876% ( 3) 00:12:59.846 14834.967 - 14894.545: 80.4180% ( 3) 00:12:59.846 14894.545 - 14954.124: 80.4383% ( 2) 00:12:59.846 14954.124 - 15013.702: 80.4688% ( 3) 00:12:59.846 15013.702 - 15073.280: 80.5093% ( 4) 00:12:59.846 15073.280 - 15132.858: 80.5398% ( 3) 00:12:59.846 15132.858 - 15192.436: 80.5804% ( 4) 00:12:59.846 15192.436 - 15252.015: 80.6209% ( 4) 00:12:59.846 15252.015 - 15371.171: 80.6717% ( 5) 00:12:59.846 15371.171 - 15490.327: 80.7325% ( 6) 00:12:59.846 15490.327 - 15609.484: 80.7731% ( 4) 00:12:59.846 15609.484 - 15728.640: 80.8036% ( 3) 00:12:59.846 15728.640 - 15847.796: 80.8442% ( 4) 00:12:59.846 15847.796 - 15966.953: 80.8746% ( 3) 00:12:59.846 15966.953 - 16086.109: 80.9152% ( 4) 00:12:59.846 16086.109 - 16205.265: 80.9456% ( 3) 00:12:59.846 16205.265 - 16324.422: 80.9761% ( 3) 00:12:59.846 16324.422 - 16443.578: 81.0166% ( 4) 00:12:59.846 16443.578 - 16562.735: 81.0471% ( 3) 00:12:59.846 16562.735 - 16681.891: 81.0877% ( 4) 00:12:59.846 16681.891 - 16801.047: 81.1181% ( 3) 00:12:59.846 16801.047 - 16920.204: 81.1485% ( 3) 00:12:59.846 16920.204 - 17039.360: 81.1688% ( 2) 00:12:59.846 17396.829 - 17515.985: 81.1790% ( 1) 00:12:59.846 17515.985 - 17635.142: 81.1993% ( 2) 00:12:59.846 17635.142 - 17754.298: 81.2297% ( 3) 00:12:59.846 17754.298 - 17873.455: 81.2804% ( 5) 00:12:59.846 17873.455 - 17992.611: 81.3718% ( 9) 00:12:59.846 17992.611 - 18111.767: 81.4529% ( 8) 00:12:59.846 18111.767 - 18230.924: 81.5239% ( 7) 00:12:59.846 18230.924 - 18350.080: 81.5950% ( 7) 00:12:59.846 18350.080 - 18469.236: 81.6964% ( 10) 00:12:59.846 18469.236 - 18588.393: 81.7675% ( 7) 00:12:59.846 18588.393 - 18707.549: 81.8689% ( 10) 00:12:59.846 18707.549 - 18826.705: 81.9704% ( 10) 00:12:59.846 18826.705 - 18945.862: 82.0617% ( 9) 00:12:59.846 18945.862 - 19065.018: 82.1530% ( 9) 00:12:59.846 19065.018 - 19184.175: 82.2950% ( 14) 00:12:59.846 19184.175 - 19303.331: 82.4472% ( 15) 00:12:59.846 
19303.331 - 19422.487: 82.5994% ( 15) 00:12:59.846 19422.487 - 19541.644: 82.8531% ( 25) 00:12:59.846 19541.644 - 19660.800: 83.1372% ( 28) 00:12:59.846 19660.800 - 19779.956: 83.4517% ( 31) 00:12:59.846 19779.956 - 19899.113: 83.7256% ( 27) 00:12:59.846 19899.113 - 20018.269: 84.0097% ( 28) 00:12:59.846 20018.269 - 20137.425: 84.3953% ( 38) 00:12:59.846 20137.425 - 20256.582: 84.7606% ( 36) 00:12:59.846 20256.582 - 20375.738: 85.1258% ( 36) 00:12:59.846 20375.738 - 20494.895: 85.5317% ( 40) 00:12:59.846 20494.895 - 20614.051: 85.9679% ( 43) 00:12:59.846 20614.051 - 20733.207: 86.4347% ( 46) 00:12:59.846 20733.207 - 20852.364: 86.9623% ( 52) 00:12:59.846 20852.364 - 20971.520: 87.3985% ( 43) 00:12:59.846 20971.520 - 21090.676: 87.8856% ( 48) 00:12:59.846 21090.676 - 21209.833: 88.3827% ( 49) 00:12:59.846 21209.833 - 21328.989: 88.8190% ( 43) 00:12:59.846 21328.989 - 21448.145: 89.3060% ( 48) 00:12:59.846 21448.145 - 21567.302: 89.8133% ( 50) 00:12:59.846 21567.302 - 21686.458: 90.3206% ( 50) 00:12:59.846 21686.458 - 21805.615: 90.7873% ( 46) 00:12:59.846 21805.615 - 21924.771: 91.2845% ( 49) 00:12:59.846 21924.771 - 22043.927: 91.7715% ( 48) 00:12:59.846 22043.927 - 22163.084: 92.2687% ( 49) 00:12:59.846 22163.084 - 22282.240: 92.7557% ( 48) 00:12:59.846 22282.240 - 22401.396: 93.2325% ( 47) 00:12:59.846 22401.396 - 22520.553: 93.7196% ( 48) 00:12:59.846 22520.553 - 22639.709: 94.1558% ( 43) 00:12:59.846 22639.709 - 22758.865: 94.5515% ( 39) 00:12:59.846 22758.865 - 22878.022: 94.9067% ( 35) 00:12:59.846 22878.022 - 22997.178: 95.2516% ( 34) 00:12:59.846 22997.178 - 23116.335: 95.5357% ( 28) 00:12:59.846 23116.335 - 23235.491: 95.8705% ( 33) 00:12:59.846 23235.491 - 23354.647: 96.2054% ( 33) 00:12:59.846 23354.647 - 23473.804: 96.5300% ( 32) 00:12:59.846 23473.804 - 23592.960: 96.8344% ( 30) 00:12:59.846 23592.960 - 23712.116: 97.1591% ( 32) 00:12:59.846 23712.116 - 23831.273: 97.4330% ( 27) 00:12:59.846 23831.273 - 23950.429: 97.6765% ( 24) 00:12:59.846 23950.429 - 24069.585: 97.8998% ( 22) 00:12:59.846 24069.585 - 24188.742: 98.0925% ( 19) 00:12:59.846 24188.742 - 24307.898: 98.2853% ( 19) 00:12:59.846 24307.898 - 24427.055: 98.4476% ( 16) 00:12:59.846 24427.055 - 24546.211: 98.5897% ( 14) 00:12:59.846 24546.211 - 24665.367: 98.7520% ( 16) 00:12:59.846 24665.367 - 24784.524: 98.8636% ( 11) 00:12:59.846 24784.524 - 24903.680: 98.9854% ( 12) 00:12:59.846 24903.680 - 25022.836: 99.0970% ( 11) 00:12:59.846 25022.836 - 25141.993: 99.1680% ( 7) 00:12:59.846 25141.993 - 25261.149: 99.2188% ( 5) 00:12:59.846 25261.149 - 25380.305: 99.2593% ( 4) 00:12:59.846 25380.305 - 25499.462: 99.3101% ( 5) 00:12:59.846 25499.462 - 25618.618: 99.3304% ( 2) 00:12:59.846 25618.618 - 25737.775: 99.3506% ( 2) 00:12:59.846 30504.029 - 30742.342: 99.3709% ( 2) 00:12:59.846 30742.342 - 30980.655: 99.4318% ( 6) 00:12:59.846 30980.655 - 31218.967: 99.5028% ( 7) 00:12:59.846 31218.967 - 31457.280: 99.5739% ( 7) 00:12:59.846 31457.280 - 31695.593: 99.6550% ( 8) 00:12:59.846 31695.593 - 31933.905: 99.7261% ( 7) 00:12:59.846 31933.905 - 32172.218: 99.7971% ( 7) 00:12:59.846 32172.218 - 32410.531: 99.8681% ( 7) 00:12:59.846 32410.531 - 32648.844: 99.9493% ( 8) 00:12:59.846 32648.844 - 32887.156: 100.0000% ( 5) 00:12:59.846 00:12:59.846 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:59.846 ============================================================================== 00:12:59.846 Range in us Cumulative IO count 00:12:59.846 8221.789 - 8281.367: 0.0203% ( 2) 00:12:59.846 8281.367 - 8340.945: 0.0609% ( 4) 
00:12:59.846 8340.945 - 8400.524: 0.0913% ( 3) 00:12:59.846 8400.524 - 8460.102: 0.1725% ( 8) 00:12:59.846 8460.102 - 8519.680: 0.2942% ( 12) 00:12:59.846 8519.680 - 8579.258: 0.4667% ( 17) 00:12:59.846 8579.258 - 8638.836: 0.6494% ( 18) 00:12:59.846 8638.836 - 8698.415: 0.8117% ( 16) 00:12:59.846 8698.415 - 8757.993: 1.0248% ( 21) 00:12:59.846 8757.993 - 8817.571: 1.2987% ( 27) 00:12:59.846 8817.571 - 8877.149: 1.7248% ( 42) 00:12:59.846 8877.149 - 8936.727: 2.2119% ( 48) 00:12:59.846 8936.727 - 8996.305: 2.7902% ( 57) 00:12:59.846 8996.305 - 9055.884: 3.5308% ( 73) 00:12:59.846 9055.884 - 9115.462: 4.3730% ( 83) 00:12:59.846 9115.462 - 9175.040: 5.3267% ( 94) 00:12:59.846 9175.040 - 9234.618: 6.2601% ( 92) 00:12:59.846 9234.618 - 9294.196: 7.3458% ( 107) 00:12:59.846 9294.196 - 9353.775: 8.4619% ( 110) 00:12:59.846 9353.775 - 9413.353: 9.6591% ( 118) 00:12:59.846 9413.353 - 9472.931: 10.9578% ( 128) 00:12:59.846 9472.931 - 9532.509: 12.2666% ( 129) 00:12:59.846 9532.509 - 9592.087: 13.6262% ( 134) 00:12:59.846 9592.087 - 9651.665: 15.0365% ( 139) 00:12:59.846 9651.665 - 9711.244: 16.4976% ( 144) 00:12:59.846 9711.244 - 9770.822: 17.9688% ( 145) 00:12:59.846 9770.822 - 9830.400: 19.5008% ( 151) 00:12:59.846 9830.400 - 9889.978: 20.9923% ( 147) 00:12:59.846 9889.978 - 9949.556: 22.5649% ( 155) 00:12:59.846 9949.556 - 10009.135: 23.9550% ( 137) 00:12:59.846 10009.135 - 10068.713: 25.2435% ( 127) 00:12:59.846 10068.713 - 10128.291: 26.4712% ( 121) 00:12:59.846 10128.291 - 10187.869: 27.6989% ( 121) 00:12:59.846 10187.869 - 10247.447: 28.9671% ( 125) 00:12:59.846 10247.447 - 10307.025: 30.1441% ( 116) 00:12:59.846 10307.025 - 10366.604: 31.2703% ( 111) 00:12:59.846 10366.604 - 10426.182: 32.2545% ( 97) 00:12:59.846 10426.182 - 10485.760: 33.1676% ( 90) 00:12:59.846 10485.760 - 10545.338: 33.9894% ( 81) 00:12:59.846 10545.338 - 10604.916: 34.8417% ( 84) 00:12:59.846 10604.916 - 10664.495: 35.7143% ( 86) 00:12:59.846 10664.495 - 10724.073: 36.6883% ( 96) 00:12:59.846 10724.073 - 10783.651: 37.5609% ( 86) 00:12:59.846 10783.651 - 10843.229: 38.5349% ( 96) 00:12:59.846 10843.229 - 10902.807: 39.4683% ( 92) 00:12:59.846 10902.807 - 10962.385: 40.4221% ( 94) 00:12:59.846 10962.385 - 11021.964: 41.3961% ( 96) 00:12:59.846 11021.964 - 11081.542: 42.2991% ( 89) 00:12:59.846 11081.542 - 11141.120: 43.3036% ( 99) 00:12:59.846 11141.120 - 11200.698: 44.4602% ( 114) 00:12:59.846 11200.698 - 11260.276: 45.7386% ( 126) 00:12:59.846 11260.276 - 11319.855: 47.0678% ( 131) 00:12:59.846 11319.855 - 11379.433: 48.4375% ( 135) 00:12:59.846 11379.433 - 11439.011: 49.6246% ( 117) 00:12:59.846 11439.011 - 11498.589: 50.9131% ( 127) 00:12:59.846 11498.589 - 11558.167: 52.1307% ( 120) 00:12:59.846 11558.167 - 11617.745: 53.3888% ( 124) 00:12:59.846 11617.745 - 11677.324: 54.5759% ( 117) 00:12:59.846 11677.324 - 11736.902: 55.8137% ( 122) 00:12:59.846 11736.902 - 11796.480: 57.2240% ( 139) 00:12:59.846 11796.480 - 11856.058: 58.6546% ( 141) 00:12:59.846 11856.058 - 11915.636: 60.1765% ( 150) 00:12:59.846 11915.636 - 11975.215: 61.6680% ( 147) 00:12:59.846 11975.215 - 12034.793: 63.0783% ( 139) 00:12:59.846 12034.793 - 12094.371: 64.4278% ( 133) 00:12:59.846 12094.371 - 12153.949: 65.6859% ( 124) 00:12:59.846 12153.949 - 12213.527: 66.9947% ( 129) 00:12:59.846 12213.527 - 12273.105: 68.3746% ( 136) 00:12:59.846 12273.105 - 12332.684: 69.6124% ( 122) 00:12:59.846 12332.684 - 12392.262: 70.6879% ( 106) 00:12:59.846 12392.262 - 12451.840: 71.6619% ( 96) 00:12:59.846 12451.840 - 12511.418: 72.6258% ( 95) 00:12:59.846 
12511.418 - 12570.996: 73.5390% ( 90) 00:12:59.846 12570.996 - 12630.575: 74.3506% ( 80) 00:12:59.846 12630.575 - 12690.153: 75.1420% ( 78) 00:12:59.846 12690.153 - 12749.731: 75.7407% ( 59) 00:12:59.846 12749.731 - 12809.309: 76.2175% ( 47) 00:12:59.846 12809.309 - 12868.887: 76.5929% ( 37) 00:12:59.846 12868.887 - 12928.465: 76.9379% ( 34) 00:12:59.846 12928.465 - 12988.044: 77.2321% ( 29) 00:12:59.846 12988.044 - 13047.622: 77.4858% ( 25) 00:12:59.846 13047.622 - 13107.200: 77.7293% ( 24) 00:12:59.846 13107.200 - 13166.778: 77.9525% ( 22) 00:12:59.846 13166.778 - 13226.356: 78.1757% ( 22) 00:12:59.846 13226.356 - 13285.935: 78.3787% ( 20) 00:12:59.846 13285.935 - 13345.513: 78.5816% ( 20) 00:12:59.846 13345.513 - 13405.091: 78.7236% ( 14) 00:12:59.846 13405.091 - 13464.669: 78.8657% ( 14) 00:12:59.846 13464.669 - 13524.247: 79.0077% ( 14) 00:12:59.846 13524.247 - 13583.825: 79.1396% ( 13) 00:12:59.846 13583.825 - 13643.404: 79.2411% ( 10) 00:12:59.846 13643.404 - 13702.982: 79.3425% ( 10) 00:12:59.846 13702.982 - 13762.560: 79.4440% ( 10) 00:12:59.846 13762.560 - 13822.138: 79.5150% ( 7) 00:12:59.846 13822.138 - 13881.716: 79.5657% ( 5) 00:12:59.846 13881.716 - 13941.295: 79.6165% ( 5) 00:12:59.846 13941.295 - 14000.873: 79.6672% ( 5) 00:12:59.846 14000.873 - 14060.451: 79.7078% ( 4) 00:12:59.846 14060.451 - 14120.029: 79.7382% ( 3) 00:12:59.846 14120.029 - 14179.607: 79.7585% ( 2) 00:12:59.846 14179.607 - 14239.185: 79.7991% ( 4) 00:12:59.846 14239.185 - 14298.764: 79.8295% ( 3) 00:12:59.846 14298.764 - 14358.342: 79.8701% ( 4) 00:12:59.846 14358.342 - 14417.920: 79.9006% ( 3) 00:12:59.846 14417.920 - 14477.498: 79.9310% ( 3) 00:12:59.847 14477.498 - 14537.076: 79.9817% ( 5) 00:12:59.847 14537.076 - 14596.655: 80.0020% ( 2) 00:12:59.847 14596.655 - 14656.233: 80.0325% ( 3) 00:12:59.847 14656.233 - 14715.811: 80.0426% ( 1) 00:12:59.847 14715.811 - 14775.389: 80.0528% ( 1) 00:12:59.847 14775.389 - 14834.967: 80.0629% ( 1) 00:12:59.847 14834.967 - 14894.545: 80.0832% ( 2) 00:12:59.847 14894.545 - 14954.124: 80.1035% ( 2) 00:12:59.847 14954.124 - 15013.702: 80.1136% ( 1) 00:12:59.847 15013.702 - 15073.280: 80.1339% ( 2) 00:12:59.847 15073.280 - 15132.858: 80.1441% ( 1) 00:12:59.847 15132.858 - 15192.436: 80.1745% ( 3) 00:12:59.847 15192.436 - 15252.015: 80.2050% ( 3) 00:12:59.847 15252.015 - 15371.171: 80.2760% ( 7) 00:12:59.847 15371.171 - 15490.327: 80.3369% ( 6) 00:12:59.847 15490.327 - 15609.484: 80.3977% ( 6) 00:12:59.847 15609.484 - 15728.640: 80.4586% ( 6) 00:12:59.847 15728.640 - 15847.796: 80.5195% ( 6) 00:12:59.847 15847.796 - 15966.953: 80.5804% ( 6) 00:12:59.847 15966.953 - 16086.109: 80.6412% ( 6) 00:12:59.847 16086.109 - 16205.265: 80.7123% ( 7) 00:12:59.847 16205.265 - 16324.422: 80.7731% ( 6) 00:12:59.847 16324.422 - 16443.578: 80.8340% ( 6) 00:12:59.847 16443.578 - 16562.735: 80.9050% ( 7) 00:12:59.847 16562.735 - 16681.891: 80.9862% ( 8) 00:12:59.847 16681.891 - 16801.047: 81.0572% ( 7) 00:12:59.847 16801.047 - 16920.204: 81.1587% ( 10) 00:12:59.847 16920.204 - 17039.360: 81.2399% ( 8) 00:12:59.847 17039.360 - 17158.516: 81.3312% ( 9) 00:12:59.847 17158.516 - 17277.673: 81.3718% ( 4) 00:12:59.847 17277.673 - 17396.829: 81.4022% ( 3) 00:12:59.847 17396.829 - 17515.985: 81.4326% ( 3) 00:12:59.847 17515.985 - 17635.142: 81.4935% ( 6) 00:12:59.847 17635.142 - 17754.298: 81.5645% ( 7) 00:12:59.847 17754.298 - 17873.455: 81.6457% ( 8) 00:12:59.847 17873.455 - 17992.611: 81.7370% ( 9) 00:12:59.847 17992.611 - 18111.767: 81.8385% ( 10) 00:12:59.847 18111.767 - 18230.924: 81.9399% 
( 10) 00:12:59.847 18230.924 - 18350.080: 82.0110% ( 7) 00:12:59.847 18350.080 - 18469.236: 82.0921% ( 8) 00:12:59.847 18469.236 - 18588.393: 82.1631% ( 7) 00:12:59.847 18588.393 - 18707.549: 82.2342% ( 7) 00:12:59.847 18707.549 - 18826.705: 82.3255% ( 9) 00:12:59.847 18826.705 - 18945.862: 82.4168% ( 9) 00:12:59.847 18945.862 - 19065.018: 82.4878% ( 7) 00:12:59.847 19065.018 - 19184.175: 82.5791% ( 9) 00:12:59.847 19184.175 - 19303.331: 82.7009% ( 12) 00:12:59.847 19303.331 - 19422.487: 82.8632% ( 16) 00:12:59.847 19422.487 - 19541.644: 83.0256% ( 16) 00:12:59.847 19541.644 - 19660.800: 83.2386% ( 21) 00:12:59.847 19660.800 - 19779.956: 83.5227% ( 28) 00:12:59.847 19779.956 - 19899.113: 83.8068% ( 28) 00:12:59.847 19899.113 - 20018.269: 84.0503% ( 24) 00:12:59.847 20018.269 - 20137.425: 84.3446% ( 29) 00:12:59.847 20137.425 - 20256.582: 84.6185% ( 27) 00:12:59.847 20256.582 - 20375.738: 84.9635% ( 34) 00:12:59.847 20375.738 - 20494.895: 85.3287% ( 36) 00:12:59.847 20494.895 - 20614.051: 85.6838% ( 35) 00:12:59.847 20614.051 - 20733.207: 86.1100% ( 42) 00:12:59.847 20733.207 - 20852.364: 86.5463% ( 43) 00:12:59.847 20852.364 - 20971.520: 87.0028% ( 45) 00:12:59.847 20971.520 - 21090.676: 87.4493% ( 44) 00:12:59.847 21090.676 - 21209.833: 87.9566% ( 50) 00:12:59.847 21209.833 - 21328.989: 88.4943% ( 53) 00:12:59.847 21328.989 - 21448.145: 89.0321% ( 53) 00:12:59.847 21448.145 - 21567.302: 89.5698% ( 53) 00:12:59.847 21567.302 - 21686.458: 90.0670% ( 49) 00:12:59.847 21686.458 - 21805.615: 90.5844% ( 51) 00:12:59.847 21805.615 - 21924.771: 91.1830% ( 59) 00:12:59.847 21924.771 - 22043.927: 91.7309% ( 54) 00:12:59.847 22043.927 - 22163.084: 92.2687% ( 53) 00:12:59.847 22163.084 - 22282.240: 92.7760% ( 50) 00:12:59.847 22282.240 - 22401.396: 93.3949% ( 61) 00:12:59.847 22401.396 - 22520.553: 93.9326% ( 53) 00:12:59.847 22520.553 - 22639.709: 94.4501% ( 51) 00:12:59.847 22639.709 - 22758.865: 94.9675% ( 51) 00:12:59.847 22758.865 - 22878.022: 95.4343% ( 46) 00:12:59.847 22878.022 - 22997.178: 95.8502% ( 41) 00:12:59.847 22997.178 - 23116.335: 96.2561% ( 40) 00:12:59.847 23116.335 - 23235.491: 96.5909% ( 33) 00:12:59.847 23235.491 - 23354.647: 96.9257% ( 33) 00:12:59.847 23354.647 - 23473.804: 97.2808% ( 35) 00:12:59.847 23473.804 - 23592.960: 97.5852% ( 30) 00:12:59.847 23592.960 - 23712.116: 97.9099% ( 32) 00:12:59.847 23712.116 - 23831.273: 98.1940% ( 28) 00:12:59.847 23831.273 - 23950.429: 98.3868% ( 19) 00:12:59.847 23950.429 - 24069.585: 98.5897% ( 20) 00:12:59.847 24069.585 - 24188.742: 98.7520% ( 16) 00:12:59.847 24188.742 - 24307.898: 98.8738% ( 12) 00:12:59.847 24307.898 - 24427.055: 98.9955% ( 12) 00:12:59.847 24427.055 - 24546.211: 99.0970% ( 10) 00:12:59.847 24546.211 - 24665.367: 99.1579% ( 6) 00:12:59.847 24665.367 - 24784.524: 99.1985% ( 4) 00:12:59.847 24784.524 - 24903.680: 99.2188% ( 2) 00:12:59.847 24903.680 - 25022.836: 99.2289% ( 1) 00:12:59.847 25022.836 - 25141.993: 99.2593% ( 3) 00:12:59.847 25141.993 - 25261.149: 99.2796% ( 2) 00:12:59.847 25261.149 - 25380.305: 99.2999% ( 2) 00:12:59.847 25380.305 - 25499.462: 99.3202% ( 2) 00:12:59.847 25499.462 - 25618.618: 99.3405% ( 2) 00:12:59.847 25618.618 - 25737.775: 99.3506% ( 1) 00:12:59.847 28597.527 - 28716.684: 99.3709% ( 2) 00:12:59.847 28716.684 - 28835.840: 99.4014% ( 3) 00:12:59.847 28835.840 - 28954.996: 99.4420% ( 4) 00:12:59.847 28954.996 - 29074.153: 99.4825% ( 4) 00:12:59.847 29074.153 - 29193.309: 99.5130% ( 3) 00:12:59.847 29193.309 - 29312.465: 99.5536% ( 4) 00:12:59.847 29312.465 - 29431.622: 99.5840% ( 3) 
00:12:59.847 29431.622 - 29550.778: 99.6246% ( 4) 00:12:59.847 29550.778 - 29669.935: 99.6652% ( 4) 00:12:59.847 29669.935 - 29789.091: 99.6956% ( 3) 00:12:59.847 29789.091 - 29908.247: 99.7362% ( 4) 00:12:59.847 29908.247 - 30027.404: 99.7666% ( 3) 00:12:59.847 30027.404 - 30146.560: 99.8072% ( 4) 00:12:59.847 30146.560 - 30265.716: 99.8478% ( 4) 00:12:59.847 30265.716 - 30384.873: 99.8782% ( 3) 00:12:59.847 30384.873 - 30504.029: 99.9188% ( 4) 00:12:59.847 30504.029 - 30742.342: 99.9797% ( 6) 00:12:59.847 30742.342 - 30980.655: 100.0000% ( 2) 00:12:59.847 00:12:59.847 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:59.847 ============================================================================== 00:12:59.847 Range in us Cumulative IO count 00:12:59.847 8162.211 - 8221.789: 0.0101% ( 1) 00:12:59.847 8221.789 - 8281.367: 0.0304% ( 2) 00:12:59.847 8281.367 - 8340.945: 0.0710% ( 4) 00:12:59.847 8340.945 - 8400.524: 0.1420% ( 7) 00:12:59.847 8400.524 - 8460.102: 0.2841% ( 14) 00:12:59.847 8460.102 - 8519.680: 0.4058% ( 12) 00:12:59.847 8519.680 - 8579.258: 0.5479% ( 14) 00:12:59.847 8579.258 - 8638.836: 0.7711% ( 22) 00:12:59.847 8638.836 - 8698.415: 0.9334% ( 16) 00:12:59.847 8698.415 - 8757.993: 1.1059% ( 17) 00:12:59.847 8757.993 - 8817.571: 1.3799% ( 27) 00:12:59.847 8817.571 - 8877.149: 1.7553% ( 37) 00:12:59.847 8877.149 - 8936.727: 2.2829% ( 52) 00:12:59.847 8936.727 - 8996.305: 2.8105% ( 52) 00:12:59.847 8996.305 - 9055.884: 3.5308% ( 71) 00:12:59.847 9055.884 - 9115.462: 4.3324% ( 79) 00:12:59.847 9115.462 - 9175.040: 5.1745% ( 83) 00:12:59.847 9175.040 - 9234.618: 6.0877% ( 90) 00:12:59.847 9234.618 - 9294.196: 7.1429% ( 104) 00:12:59.847 9294.196 - 9353.775: 8.3097% ( 115) 00:12:59.847 9353.775 - 9413.353: 9.5779% ( 125) 00:12:59.847 9413.353 - 9472.931: 10.8563% ( 126) 00:12:59.847 9472.931 - 9532.509: 12.2768% ( 140) 00:12:59.847 9532.509 - 9592.087: 13.6364% ( 134) 00:12:59.847 9592.087 - 9651.665: 15.0162% ( 136) 00:12:59.847 9651.665 - 9711.244: 16.4367% ( 140) 00:12:59.847 9711.244 - 9770.822: 17.9586% ( 150) 00:12:59.847 9770.822 - 9830.400: 19.4602% ( 148) 00:12:59.847 9830.400 - 9889.978: 20.9821% ( 150) 00:12:59.847 9889.978 - 9949.556: 22.4229% ( 142) 00:12:59.847 9949.556 - 10009.135: 23.8941% ( 145) 00:12:59.847 10009.135 - 10068.713: 25.2841% ( 137) 00:12:59.847 10068.713 - 10128.291: 26.6640% ( 136) 00:12:59.847 10128.291 - 10187.869: 28.0438% ( 136) 00:12:59.847 10187.869 - 10247.447: 29.3222% ( 126) 00:12:59.847 10247.447 - 10307.025: 30.5398% ( 120) 00:12:59.847 10307.025 - 10366.604: 31.5645% ( 101) 00:12:59.847 10366.604 - 10426.182: 32.5893% ( 101) 00:12:59.847 10426.182 - 10485.760: 33.5227% ( 92) 00:12:59.847 10485.760 - 10545.338: 34.3446% ( 81) 00:12:59.847 10545.338 - 10604.916: 35.1968% ( 84) 00:12:59.847 10604.916 - 10664.495: 36.1100% ( 90) 00:12:59.847 10664.495 - 10724.073: 36.9420% ( 82) 00:12:59.847 10724.073 - 10783.651: 37.8551% ( 90) 00:12:59.847 10783.651 - 10843.229: 38.7480% ( 88) 00:12:59.847 10843.229 - 10902.807: 39.6205% ( 86) 00:12:59.847 10902.807 - 10962.385: 40.5134% ( 88) 00:12:59.847 10962.385 - 11021.964: 41.4265% ( 90) 00:12:59.848 11021.964 - 11081.542: 42.4107% ( 97) 00:12:59.848 11081.542 - 11141.120: 43.3239% ( 90) 00:12:59.848 11141.120 - 11200.698: 44.4298% ( 109) 00:12:59.848 11200.698 - 11260.276: 45.5256% ( 108) 00:12:59.848 11260.276 - 11319.855: 46.8243% ( 128) 00:12:59.848 11319.855 - 11379.433: 48.1636% ( 132) 00:12:59.848 11379.433 - 11439.011: 49.4420% ( 126) 00:12:59.848 11439.011 - 
11498.589: 50.7001% ( 124) 00:12:59.848 11498.589 - 11558.167: 51.9278% ( 121) 00:12:59.848 11558.167 - 11617.745: 53.2062% ( 126) 00:12:59.848 11617.745 - 11677.324: 54.4643% ( 124) 00:12:59.848 11677.324 - 11736.902: 55.6920% ( 121) 00:12:59.848 11736.902 - 11796.480: 57.1023% ( 139) 00:12:59.848 11796.480 - 11856.058: 58.6445% ( 152) 00:12:59.848 11856.058 - 11915.636: 60.1562% ( 149) 00:12:59.848 11915.636 - 11975.215: 61.6579% ( 148) 00:12:59.848 11975.215 - 12034.793: 63.0479% ( 137) 00:12:59.848 12034.793 - 12094.371: 64.3872% ( 132) 00:12:59.848 12094.371 - 12153.949: 65.7062% ( 130) 00:12:59.848 12153.949 - 12213.527: 66.9846% ( 126) 00:12:59.848 12213.527 - 12273.105: 68.3340% ( 133) 00:12:59.848 12273.105 - 12332.684: 69.6733% ( 132) 00:12:59.848 12332.684 - 12392.262: 70.8198% ( 113) 00:12:59.848 12392.262 - 12451.840: 71.8649% ( 103) 00:12:59.848 12451.840 - 12511.418: 72.8389% ( 96) 00:12:59.848 12511.418 - 12570.996: 73.6201% ( 77) 00:12:59.848 12570.996 - 12630.575: 74.3709% ( 74) 00:12:59.848 12630.575 - 12690.153: 75.0609% ( 68) 00:12:59.848 12690.153 - 12749.731: 75.7204% ( 65) 00:12:59.848 12749.731 - 12809.309: 76.1465% ( 42) 00:12:59.848 12809.309 - 12868.887: 76.4407% ( 29) 00:12:59.848 12868.887 - 12928.465: 76.7350% ( 29) 00:12:59.848 12928.465 - 12988.044: 77.0089% ( 27) 00:12:59.848 12988.044 - 13047.622: 77.2423% ( 23) 00:12:59.848 13047.622 - 13107.200: 77.4756% ( 23) 00:12:59.848 13107.200 - 13166.778: 77.6684% ( 19) 00:12:59.848 13166.778 - 13226.356: 77.8511% ( 18) 00:12:59.848 13226.356 - 13285.935: 78.0540% ( 20) 00:12:59.848 13285.935 - 13345.513: 78.2366% ( 18) 00:12:59.848 13345.513 - 13405.091: 78.4091% ( 17) 00:12:59.848 13405.091 - 13464.669: 78.5613% ( 15) 00:12:59.848 13464.669 - 13524.247: 78.6627% ( 10) 00:12:59.848 13524.247 - 13583.825: 78.7845% ( 12) 00:12:59.848 13583.825 - 13643.404: 78.8860% ( 10) 00:12:59.848 13643.404 - 13702.982: 79.0179% ( 13) 00:12:59.848 13702.982 - 13762.560: 79.1193% ( 10) 00:12:59.848 13762.560 - 13822.138: 79.2106% ( 9) 00:12:59.848 13822.138 - 13881.716: 79.2918% ( 8) 00:12:59.848 13881.716 - 13941.295: 79.3527% ( 6) 00:12:59.848 13941.295 - 14000.873: 79.4034% ( 5) 00:12:59.848 14000.873 - 14060.451: 79.4541% ( 5) 00:12:59.848 14060.451 - 14120.029: 79.4846% ( 3) 00:12:59.848 14120.029 - 14179.607: 79.4947% ( 1) 00:12:59.848 14179.607 - 14239.185: 79.5150% ( 2) 00:12:59.848 14239.185 - 14298.764: 79.5252% ( 1) 00:12:59.848 14298.764 - 14358.342: 79.5455% ( 2) 00:12:59.848 14358.342 - 14417.920: 79.5657% ( 2) 00:12:59.848 14417.920 - 14477.498: 79.5860% ( 2) 00:12:59.848 14477.498 - 14537.076: 79.5962% ( 1) 00:12:59.848 14537.076 - 14596.655: 79.6165% ( 2) 00:12:59.848 14596.655 - 14656.233: 79.6368% ( 2) 00:12:59.848 14656.233 - 14715.811: 79.6571% ( 2) 00:12:59.848 14715.811 - 14775.389: 79.6875% ( 3) 00:12:59.848 14775.389 - 14834.967: 79.7281% ( 4) 00:12:59.848 14834.967 - 14894.545: 79.7484% ( 2) 00:12:59.848 14894.545 - 14954.124: 79.7788% ( 3) 00:12:59.848 14954.124 - 15013.702: 79.8093% ( 3) 00:12:59.848 15013.702 - 15073.280: 79.8498% ( 4) 00:12:59.848 15073.280 - 15132.858: 79.8701% ( 2) 00:12:59.848 15252.015 - 15371.171: 79.8904% ( 2) 00:12:59.848 15371.171 - 15490.327: 79.9209% ( 3) 00:12:59.848 15490.327 - 15609.484: 79.9513% ( 3) 00:12:59.848 15609.484 - 15728.640: 79.9817% ( 3) 00:12:59.848 15728.640 - 15847.796: 80.0122% ( 3) 00:12:59.848 15847.796 - 15966.953: 80.1035% ( 9) 00:12:59.848 15966.953 - 16086.109: 80.2354% ( 13) 00:12:59.848 16086.109 - 16205.265: 80.3470% ( 11) 00:12:59.848 
16205.265 - 16324.422: 80.4485% ( 10) 00:12:59.848 16324.422 - 16443.578: 80.5804% ( 13) 00:12:59.848 16443.578 - 16562.735: 80.7427% ( 16) 00:12:59.848 16562.735 - 16681.891: 80.8746% ( 13) 00:12:59.848 16681.891 - 16801.047: 80.9659% ( 9) 00:12:59.848 16801.047 - 16920.204: 81.1080% ( 14) 00:12:59.848 16920.204 - 17039.360: 81.2297% ( 12) 00:12:59.848 17039.360 - 17158.516: 81.3515% ( 12) 00:12:59.848 17158.516 - 17277.673: 81.4631% ( 11) 00:12:59.848 17277.673 - 17396.829: 81.5645% ( 10) 00:12:59.848 17396.829 - 17515.985: 81.6558% ( 9) 00:12:59.848 17515.985 - 17635.142: 81.7269% ( 7) 00:12:59.848 17635.142 - 17754.298: 81.7877% ( 6) 00:12:59.848 17754.298 - 17873.455: 81.8689% ( 8) 00:12:59.848 17873.455 - 17992.611: 81.9298% ( 6) 00:12:59.848 17992.611 - 18111.767: 81.9907% ( 6) 00:12:59.848 18111.767 - 18230.924: 82.0414% ( 5) 00:12:59.848 18230.924 - 18350.080: 82.0718% ( 3) 00:12:59.848 18350.080 - 18469.236: 82.1226% ( 5) 00:12:59.848 18469.236 - 18588.393: 82.1834% ( 6) 00:12:59.848 18588.393 - 18707.549: 82.2240% ( 4) 00:12:59.848 18707.549 - 18826.705: 82.2646% ( 4) 00:12:59.848 18826.705 - 18945.862: 82.3255% ( 6) 00:12:59.848 18945.862 - 19065.018: 82.3965% ( 7) 00:12:59.848 19065.018 - 19184.175: 82.5081% ( 11) 00:12:59.848 19184.175 - 19303.331: 82.6705% ( 16) 00:12:59.848 19303.331 - 19422.487: 82.8226% ( 15) 00:12:59.848 19422.487 - 19541.644: 83.0357% ( 21) 00:12:59.848 19541.644 - 19660.800: 83.2792% ( 24) 00:12:59.848 19660.800 - 19779.956: 83.5633% ( 28) 00:12:59.848 19779.956 - 19899.113: 83.8373% ( 27) 00:12:59.848 19899.113 - 20018.269: 84.1518% ( 31) 00:12:59.848 20018.269 - 20137.425: 84.4460% ( 29) 00:12:59.848 20137.425 - 20256.582: 84.7808% ( 33) 00:12:59.848 20256.582 - 20375.738: 85.1562% ( 37) 00:12:59.848 20375.738 - 20494.895: 85.5824% ( 42) 00:12:59.848 20494.895 - 20614.051: 85.9578% ( 37) 00:12:59.848 20614.051 - 20733.207: 86.4245% ( 46) 00:12:59.848 20733.207 - 20852.364: 86.8912% ( 46) 00:12:59.848 20852.364 - 20971.520: 87.4087% ( 51) 00:12:59.848 20971.520 - 21090.676: 87.9464% ( 53) 00:12:59.848 21090.676 - 21209.833: 88.4334% ( 48) 00:12:59.848 21209.833 - 21328.989: 88.9306% ( 49) 00:12:59.848 21328.989 - 21448.145: 89.4683% ( 53) 00:12:59.848 21448.145 - 21567.302: 89.9858% ( 51) 00:12:59.848 21567.302 - 21686.458: 90.5032% ( 51) 00:12:59.848 21686.458 - 21805.615: 91.0106% ( 50) 00:12:59.848 21805.615 - 21924.771: 91.5179% ( 50) 00:12:59.848 21924.771 - 22043.927: 92.0455% ( 52) 00:12:59.848 22043.927 - 22163.084: 92.5832% ( 53) 00:12:59.848 22163.084 - 22282.240: 93.1209% ( 53) 00:12:59.848 22282.240 - 22401.396: 93.6790% ( 55) 00:12:59.848 22401.396 - 22520.553: 94.1558% ( 47) 00:12:59.848 22520.553 - 22639.709: 94.6631% ( 50) 00:12:59.848 22639.709 - 22758.865: 95.1400% ( 47) 00:12:59.848 22758.865 - 22878.022: 95.5966% ( 45) 00:12:59.848 22878.022 - 22997.178: 96.0430% ( 44) 00:12:59.848 22997.178 - 23116.335: 96.4489% ( 40) 00:12:59.848 23116.335 - 23235.491: 96.7938% ( 34) 00:12:59.848 23235.491 - 23354.647: 97.1185% ( 32) 00:12:59.848 23354.647 - 23473.804: 97.4432% ( 32) 00:12:59.848 23473.804 - 23592.960: 97.7070% ( 26) 00:12:59.848 23592.960 - 23712.116: 97.9708% ( 26) 00:12:59.848 23712.116 - 23831.273: 98.1940% ( 22) 00:12:59.848 23831.273 - 23950.429: 98.4071% ( 21) 00:12:59.848 23950.429 - 24069.585: 98.5897% ( 18) 00:12:59.848 24069.585 - 24188.742: 98.7216% ( 13) 00:12:59.848 24188.742 - 24307.898: 98.8433% ( 12) 00:12:59.848 24307.898 - 24427.055: 98.9651% ( 12) 00:12:59.848 24427.055 - 24546.211: 99.0361% ( 7) 00:12:59.848 
24546.211 - 24665.367: 99.0869% ( 5) 00:12:59.848 24665.367 - 24784.524: 99.1376% ( 5) 00:12:59.848 24784.524 - 24903.680: 99.1883% ( 5) 00:12:59.848 24903.680 - 25022.836: 99.2390% ( 5) 00:12:59.848 25022.836 - 25141.993: 99.2796% ( 4) 00:12:59.848 25141.993 - 25261.149: 99.2999% ( 2) 00:12:59.848 25261.149 - 25380.305: 99.3202% ( 2) 00:12:59.848 25380.305 - 25499.462: 99.3304% ( 1) 00:12:59.848 25499.462 - 25618.618: 99.3506% ( 2) 00:12:59.848 26691.025 - 26810.182: 99.3811% ( 3) 00:12:59.848 26810.182 - 26929.338: 99.4115% ( 3) 00:12:59.848 26929.338 - 27048.495: 99.4521% ( 4) 00:12:59.848 27048.495 - 27167.651: 99.4927% ( 4) 00:12:59.848 27167.651 - 27286.807: 99.5231% ( 3) 00:12:59.848 27286.807 - 27405.964: 99.5637% ( 4) 00:12:59.848 27405.964 - 27525.120: 99.6043% ( 4) 00:12:59.848 27525.120 - 27644.276: 99.6347% ( 3) 00:12:59.848 27644.276 - 27763.433: 99.6753% ( 4) 00:12:59.848 27763.433 - 27882.589: 99.7058% ( 3) 00:12:59.848 27882.589 - 28001.745: 99.7463% ( 4) 00:12:59.848 28001.745 - 28120.902: 99.7869% ( 4) 00:12:59.848 28120.902 - 28240.058: 99.8275% ( 4) 00:12:59.848 28240.058 - 28359.215: 99.8580% ( 3) 00:12:59.848 28359.215 - 28478.371: 99.8985% ( 4) 00:12:59.848 28478.371 - 28597.527: 99.9391% ( 4) 00:12:59.848 28597.527 - 28716.684: 99.9797% ( 4) 00:12:59.848 28716.684 - 28835.840: 100.0000% ( 2) 00:12:59.848 00:12:59.848 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:59.848 ============================================================================== 00:12:59.848 Range in us Cumulative IO count 00:12:59.848 8162.211 - 8221.789: 0.0101% ( 1) 00:12:59.848 8221.789 - 8281.367: 0.0304% ( 2) 00:12:59.848 8281.367 - 8340.945: 0.0609% ( 3) 00:12:59.848 8340.945 - 8400.524: 0.1116% ( 5) 00:12:59.848 8400.524 - 8460.102: 0.1725% ( 6) 00:12:59.848 8460.102 - 8519.680: 0.3044% ( 13) 00:12:59.848 8519.680 - 8579.258: 0.4870% ( 18) 00:12:59.848 8579.258 - 8638.836: 0.6899% ( 20) 00:12:59.848 8638.836 - 8698.415: 0.8624% ( 17) 00:12:59.848 8698.415 - 8757.993: 1.0450% ( 18) 00:12:59.848 8757.993 - 8817.571: 1.2683% ( 22) 00:12:59.848 8817.571 - 8877.149: 1.6741% ( 40) 00:12:59.848 8877.149 - 8936.727: 2.1510% ( 47) 00:12:59.848 8936.727 - 8996.305: 2.7597% ( 60) 00:12:59.848 8996.305 - 9055.884: 3.5004% ( 73) 00:12:59.848 9055.884 - 9115.462: 4.3730% ( 86) 00:12:59.848 9115.462 - 9175.040: 5.2658% ( 88) 00:12:59.848 9175.040 - 9234.618: 6.2196% ( 94) 00:12:59.848 9234.618 - 9294.196: 7.2342% ( 100) 00:12:59.848 9294.196 - 9353.775: 8.3502% ( 110) 00:12:59.848 9353.775 - 9413.353: 9.5069% ( 114) 00:12:59.848 9413.353 - 9472.931: 10.8056% ( 128) 00:12:59.848 9472.931 - 9532.509: 12.0739% ( 125) 00:12:59.848 9532.509 - 9592.087: 13.3827% ( 129) 00:12:59.848 9592.087 - 9651.665: 14.8235% ( 142) 00:12:59.848 9651.665 - 9711.244: 16.2338% ( 139) 00:12:59.848 9711.244 - 9770.822: 17.7658% ( 151) 00:12:59.848 9770.822 - 9830.400: 19.2573% ( 147) 00:12:59.848 9830.400 - 9889.978: 20.7589% ( 148) 00:12:59.848 9889.978 - 9949.556: 22.2098% ( 143) 00:12:59.848 9949.556 - 10009.135: 23.6100% ( 138) 00:12:59.848 10009.135 - 10068.713: 25.0710% ( 144) 00:12:59.848 10068.713 - 10128.291: 26.4002% ( 131) 00:12:59.848 10128.291 - 10187.869: 27.7597% ( 134) 00:12:59.848 10187.869 - 10247.447: 29.0584% ( 128) 00:12:59.848 10247.447 - 10307.025: 30.2658% ( 119) 00:12:59.848 10307.025 - 10366.604: 31.4428% ( 116) 00:12:59.848 10366.604 - 10426.182: 32.5284% ( 107) 00:12:59.848 10426.182 - 10485.760: 33.5329% ( 99) 00:12:59.848 10485.760 - 10545.338: 34.4968% ( 95) 00:12:59.848 
10545.338 - 10604.916: 35.4200% ( 91) 00:12:59.848 10604.916 - 10664.495: 36.2419% ( 81) 00:12:59.848 10664.495 - 10724.073: 37.0840% ( 83) 00:12:59.848 10724.073 - 10783.651: 37.9160% ( 82) 00:12:59.848 10783.651 - 10843.229: 38.8393% ( 91) 00:12:59.848 10843.229 - 10902.807: 39.7626% ( 91) 00:12:59.848 10902.807 - 10962.385: 40.7265% ( 95) 00:12:59.848 10962.385 - 11021.964: 41.7817% ( 104) 00:12:59.848 11021.964 - 11081.542: 42.8166% ( 102) 00:12:59.848 11081.542 - 11141.120: 43.9123% ( 108) 00:12:59.848 11141.120 - 11200.698: 44.9777% ( 105) 00:12:59.848 11200.698 - 11260.276: 46.0938% ( 110) 00:12:59.848 11260.276 - 11319.855: 47.2504% ( 114) 00:12:59.848 11319.855 - 11379.433: 48.5998% ( 133) 00:12:59.848 11379.433 - 11439.011: 49.8174% ( 120) 00:12:59.848 11439.011 - 11498.589: 51.0958% ( 126) 00:12:59.848 11498.589 - 11558.167: 52.3742% ( 126) 00:12:59.848 11558.167 - 11617.745: 53.5816% ( 119) 00:12:59.848 11617.745 - 11677.324: 54.8803% ( 128) 00:12:59.848 11677.324 - 11736.902: 56.2601% ( 136) 00:12:59.848 11736.902 - 11796.480: 57.6502% ( 137) 00:12:59.848 11796.480 - 11856.058: 59.0909% ( 142) 00:12:59.848 11856.058 - 11915.636: 60.5114% ( 140) 00:12:59.848 11915.636 - 11975.215: 61.9115% ( 138) 00:12:59.848 11975.215 - 12034.793: 63.3827% ( 145) 00:12:59.848 12034.793 - 12094.371: 64.7626% ( 136) 00:12:59.848 12094.371 - 12153.949: 66.0714% ( 129) 00:12:59.848 12153.949 - 12213.527: 67.3093% ( 122) 00:12:59.848 12213.527 - 12273.105: 68.5471% ( 122) 00:12:59.848 12273.105 - 12332.684: 69.6631% ( 110) 00:12:59.848 12332.684 - 12392.262: 70.7386% ( 106) 00:12:59.848 12392.262 - 12451.840: 71.7837% ( 103) 00:12:59.848 12451.840 - 12511.418: 72.6867% ( 89) 00:12:59.848 12511.418 - 12570.996: 73.5491% ( 85) 00:12:59.848 12570.996 - 12630.575: 74.3101% ( 75) 00:12:59.848 12630.575 - 12690.153: 75.0406% ( 72) 00:12:59.848 12690.153 - 12749.731: 75.6189% ( 57) 00:12:59.848 12749.731 - 12809.309: 76.1668% ( 54) 00:12:59.848 12809.309 - 12868.887: 76.5422% ( 37) 00:12:59.848 12868.887 - 12928.465: 76.7959% ( 25) 00:12:59.848 12928.465 - 12988.044: 76.9785% ( 18) 00:12:59.848 12988.044 - 13047.622: 77.1408% ( 16) 00:12:59.848 13047.622 - 13107.200: 77.2829% ( 14) 00:12:59.848 13107.200 - 13166.778: 77.4351% ( 15) 00:12:59.848 13166.778 - 13226.356: 77.6075% ( 17) 00:12:59.848 13226.356 - 13285.935: 77.7699% ( 16) 00:12:59.848 13285.935 - 13345.513: 77.9322% ( 16) 00:12:59.848 13345.513 - 13405.091: 78.0743% ( 14) 00:12:59.848 13405.091 - 13464.669: 78.2062% ( 13) 00:12:59.848 13464.669 - 13524.247: 78.3482% ( 14) 00:12:59.848 13524.247 - 13583.825: 78.4801% ( 13) 00:12:59.848 13583.825 - 13643.404: 78.6019% ( 12) 00:12:59.848 13643.404 - 13702.982: 78.7338% ( 13) 00:12:59.848 13702.982 - 13762.560: 78.8251% ( 9) 00:12:59.848 13762.560 - 13822.138: 78.8961% ( 7) 00:12:59.848 13822.138 - 13881.716: 79.0179% ( 12) 00:12:59.848 13881.716 - 13941.295: 79.0889% ( 7) 00:12:59.848 13941.295 - 14000.873: 79.1700% ( 8) 00:12:59.848 14000.873 - 14060.451: 79.2411% ( 7) 00:12:59.848 14060.451 - 14120.029: 79.3121% ( 7) 00:12:59.848 14120.029 - 14179.607: 79.3425% ( 3) 00:12:59.848 14179.607 - 14239.185: 79.3831% ( 4) 00:12:59.848 14239.185 - 14298.764: 79.4034% ( 2) 00:12:59.848 14298.764 - 14358.342: 79.4440% ( 4) 00:12:59.848 14358.342 - 14417.920: 79.4744% ( 3) 00:12:59.849 14417.920 - 14477.498: 79.5049% ( 3) 00:12:59.849 14477.498 - 14537.076: 79.5353% ( 3) 00:12:59.849 14537.076 - 14596.655: 79.5556% ( 2) 00:12:59.849 14596.655 - 14656.233: 79.5962% ( 4) 00:12:59.849 14656.233 - 14715.811: 
79.6266% ( 3) 00:12:59.849 14715.811 - 14775.389: 79.6672% ( 4) 00:12:59.849 14775.389 - 14834.967: 79.7078% ( 4) 00:12:59.849 14834.967 - 14894.545: 79.7281% ( 2) 00:12:59.849 14894.545 - 14954.124: 79.7484% ( 2) 00:12:59.849 14954.124 - 15013.702: 79.7585% ( 1) 00:12:59.849 15013.702 - 15073.280: 79.7788% ( 2) 00:12:59.849 15073.280 - 15132.858: 79.7991% ( 2) 00:12:59.849 15132.858 - 15192.436: 79.8194% ( 2) 00:12:59.849 15192.436 - 15252.015: 79.8295% ( 1) 00:12:59.849 15252.015 - 15371.171: 79.9006% ( 7) 00:12:59.849 15371.171 - 15490.327: 79.9716% ( 7) 00:12:59.849 15490.327 - 15609.484: 80.0426% ( 7) 00:12:59.849 15609.484 - 15728.640: 80.1136% ( 7) 00:12:59.849 15728.640 - 15847.796: 80.1847% ( 7) 00:12:59.849 15847.796 - 15966.953: 80.2151% ( 3) 00:12:59.849 15966.953 - 16086.109: 80.2455% ( 3) 00:12:59.849 16086.109 - 16205.265: 80.2760% ( 3) 00:12:59.849 16205.265 - 16324.422: 80.3369% ( 6) 00:12:59.849 16324.422 - 16443.578: 80.3977% ( 6) 00:12:59.849 16443.578 - 16562.735: 80.4992% ( 10) 00:12:59.849 16562.735 - 16681.891: 80.6311% ( 13) 00:12:59.849 16681.891 - 16801.047: 80.7427% ( 11) 00:12:59.849 16801.047 - 16920.204: 80.8949% ( 15) 00:12:59.849 16920.204 - 17039.360: 81.0065% ( 11) 00:12:59.849 17039.360 - 17158.516: 81.0775% ( 7) 00:12:59.849 17158.516 - 17277.673: 81.1891% ( 11) 00:12:59.849 17277.673 - 17396.829: 81.3007% ( 11) 00:12:59.849 17396.829 - 17515.985: 81.4732% ( 17) 00:12:59.849 17515.985 - 17635.142: 81.6051% ( 13) 00:12:59.849 17635.142 - 17754.298: 81.7167% ( 11) 00:12:59.849 17754.298 - 17873.455: 81.8588% ( 14) 00:12:59.849 17873.455 - 17992.611: 81.9907% ( 13) 00:12:59.849 17992.611 - 18111.767: 82.1124% ( 12) 00:12:59.849 18111.767 - 18230.924: 82.2240% ( 11) 00:12:59.849 18230.924 - 18350.080: 82.3153% ( 9) 00:12:59.849 18350.080 - 18469.236: 82.4168% ( 10) 00:12:59.849 18469.236 - 18588.393: 82.5284% ( 11) 00:12:59.849 18588.393 - 18707.549: 82.6400% ( 11) 00:12:59.849 18707.549 - 18826.705: 82.7313% ( 9) 00:12:59.849 18826.705 - 18945.862: 82.8632% ( 13) 00:12:59.849 18945.862 - 19065.018: 82.9748% ( 11) 00:12:59.849 19065.018 - 19184.175: 83.0662% ( 9) 00:12:59.849 19184.175 - 19303.331: 83.1778% ( 11) 00:12:59.849 19303.331 - 19422.487: 83.2995% ( 12) 00:12:59.849 19422.487 - 19541.644: 83.4314% ( 13) 00:12:59.849 19541.644 - 19660.800: 83.5735% ( 14) 00:12:59.849 19660.800 - 19779.956: 83.7459% ( 17) 00:12:59.849 19779.956 - 19899.113: 83.9286% ( 18) 00:12:59.849 19899.113 - 20018.269: 84.2025% ( 27) 00:12:59.849 20018.269 - 20137.425: 84.4460% ( 24) 00:12:59.849 20137.425 - 20256.582: 84.7808% ( 33) 00:12:59.849 20256.582 - 20375.738: 85.1765% ( 39) 00:12:59.849 20375.738 - 20494.895: 85.5519% ( 37) 00:12:59.849 20494.895 - 20614.051: 85.9578% ( 40) 00:12:59.849 20614.051 - 20733.207: 86.4144% ( 45) 00:12:59.849 20733.207 - 20852.364: 86.9420% ( 52) 00:12:59.849 20852.364 - 20971.520: 87.5000% ( 55) 00:12:59.849 20971.520 - 21090.676: 88.0479% ( 54) 00:12:59.849 21090.676 - 21209.833: 88.5856% ( 53) 00:12:59.849 21209.833 - 21328.989: 89.1538% ( 56) 00:12:59.849 21328.989 - 21448.145: 89.7017% ( 54) 00:12:59.849 21448.145 - 21567.302: 90.2597% ( 55) 00:12:59.849 21567.302 - 21686.458: 90.7265% ( 46) 00:12:59.849 21686.458 - 21805.615: 91.2845% ( 55) 00:12:59.849 21805.615 - 21924.771: 91.8121% ( 52) 00:12:59.849 21924.771 - 22043.927: 92.3295% ( 51) 00:12:59.849 22043.927 - 22163.084: 92.8876% ( 55) 00:12:59.849 22163.084 - 22282.240: 93.4152% ( 52) 00:12:59.849 22282.240 - 22401.396: 93.9326% ( 51) 00:12:59.849 22401.396 - 22520.553: 94.4704% 
( 53) 00:12:59.849 22520.553 - 22639.709: 94.8965% ( 42) 00:12:59.849 22639.709 - 22758.865: 95.3328% ( 43) 00:12:59.849 22758.865 - 22878.022: 95.7285% ( 39) 00:12:59.849 22878.022 - 22997.178: 96.1445% ( 41) 00:12:59.849 22997.178 - 23116.335: 96.4793% ( 33) 00:12:59.849 23116.335 - 23235.491: 96.7634% ( 28) 00:12:59.849 23235.491 - 23354.647: 97.0576% ( 29) 00:12:59.849 23354.647 - 23473.804: 97.3113% ( 25) 00:12:59.849 23473.804 - 23592.960: 97.5852% ( 27) 00:12:59.849 23592.960 - 23712.116: 97.8186% ( 23) 00:12:59.849 23712.116 - 23831.273: 98.0418% ( 22) 00:12:59.849 23831.273 - 23950.429: 98.2346% ( 19) 00:12:59.849 23950.429 - 24069.585: 98.4172% ( 18) 00:12:59.849 24069.585 - 24188.742: 98.5288% ( 11) 00:12:59.849 24188.742 - 24307.898: 98.6404% ( 11) 00:12:59.849 24307.898 - 24427.055: 98.7216% ( 8) 00:12:59.849 24427.055 - 24546.211: 98.7622% ( 4) 00:12:59.849 24546.211 - 24665.367: 98.8129% ( 5) 00:12:59.849 24665.367 - 24784.524: 98.8636% ( 5) 00:12:59.849 24784.524 - 24903.680: 98.9245% ( 6) 00:12:59.849 24903.680 - 25022.836: 99.0057% ( 8) 00:12:59.849 25022.836 - 25141.993: 99.0869% ( 8) 00:12:59.849 25141.993 - 25261.149: 99.1680% ( 8) 00:12:59.849 25261.149 - 25380.305: 99.2492% ( 8) 00:12:59.849 25380.305 - 25499.462: 99.3101% ( 6) 00:12:59.849 25499.462 - 25618.618: 99.3709% ( 6) 00:12:59.849 25618.618 - 25737.775: 99.4318% ( 6) 00:12:59.849 25737.775 - 25856.931: 99.4927% ( 6) 00:12:59.849 25856.931 - 25976.087: 99.5333% ( 4) 00:12:59.849 25976.087 - 26095.244: 99.5840% ( 5) 00:12:59.849 26095.244 - 26214.400: 99.6449% ( 6) 00:12:59.849 26214.400 - 26333.556: 99.6855% ( 4) 00:12:59.849 26333.556 - 26452.713: 99.7159% ( 3) 00:12:59.849 26452.713 - 26571.869: 99.7565% ( 4) 00:12:59.849 26571.869 - 26691.025: 99.7971% ( 4) 00:12:59.849 26691.025 - 26810.182: 99.8377% ( 4) 00:12:59.849 26810.182 - 26929.338: 99.8580% ( 2) 00:12:59.849 26929.338 - 27048.495: 99.8681% ( 1) 00:12:59.849 27048.495 - 27167.651: 99.8985% ( 3) 00:12:59.849 27167.651 - 27286.807: 99.9391% ( 4) 00:12:59.849 27286.807 - 27405.964: 99.9696% ( 3) 00:12:59.849 27405.964 - 27525.120: 100.0000% ( 3)
00:12:59.849
00:12:59.849 13:54:24 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:13:01.222 Initializing NVMe Controllers
00:13:01.222 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:13:01.222 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:13:01.222 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:13:01.222 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:13:01.222 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:13:01.222 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:13:01.222 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:13:01.222 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:13:01.222 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:13:01.222 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:13:01.222 Initialization complete. Launching workers.
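The spdk_nvme_perf command above drives all six namespaces from a single core: -q 128 keeps 128 I/Os outstanding per namespace, -w write issues sequential writes, -o 12288 uses 12288-byte I/Os, -t 1 runs the workload for one second, and -i 0 selects shared-memory group 0. The doubled -L flag is what produces the per-bucket latency histograms in this log; a single -L prints only the percentile summaries. Each histogram line reads "lower - upper: cumulative% ( I/Os completing in that bucket )", so the cumulative column climbs to 100.0000% at the slowest bucket. The headline table that follows can be cross-checked with Little's law; a minimal sketch in Python, assuming the constants are transcribed from the first row of that table (the script and variable names are illustrative, not part of the SPDK tree):

# littles_law_check.py - sanity-check one row of the perf summary below.
# Assumed inputs: values copied from the "Device Information" table in this log.
QUEUE_DEPTH = 128            # from -q 128
IO_SIZE = 12288              # bytes, from -o 12288

iops = 8634.35               # IOPS column, PCIE (0000:00:10.0) NSID 1
reported_avg_us = 14879.35   # Average column (microseconds)
reported_mib_s = 101.18      # MiB/s column

# Little's law for a saturated queue: mean latency ~= in-flight I/Os / IOPS.
predicted_avg_us = QUEUE_DEPTH / iops * 1e6
print(f"latency: predicted {predicted_avg_us:.1f} us vs reported {reported_avg_us:.1f} us")
# -> ~14824.5 us vs 14879.35 us; a small gap is expected, since queue ramp-up
#    and completion bookkeeping are not pure queueing time.

# The bandwidth column is just IOPS times I/O size:
predicted_mib_s = iops * IO_SIZE / (1024 * 1024)
print(f"bandwidth: predicted {predicted_mib_s:.2f} MiB/s vs reported {reported_mib_s:.2f} MiB/s")
# -> 101.18 MiB/s, matching the table.

All six rows below report identical IOPS, consistent with one core servicing six equally loaded queues; the rows differ mainly in the tail, with the max column dropping from about 40 ms on 0000:00:10.0 to about 29 ms on 0000:00:12.0 NSID 3.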
00:13:01.222 ========================================================
00:13:01.222 Latency(us)
00:13:01.222 Device Information : IOPS MiB/s Average min max
00:13:01.222 PCIE (0000:00:10.0) NSID 1 from core 0: 8634.35 101.18 14879.35 9500.98 40096.95
00:13:01.222 PCIE (0000:00:11.0) NSID 1 from core 0: 8634.35 101.18 14853.61 9708.33 37509.56
00:13:01.222 PCIE (0000:00:13.0) NSID 1 from core 0: 8634.35 101.18 14827.22 9589.31 35884.64
00:13:01.222 PCIE (0000:00:12.0) NSID 1 from core 0: 8634.35 101.18 14800.62 9619.45 33493.46
00:13:01.222 PCIE (0000:00:12.0) NSID 2 from core 0: 8634.35 101.18 14774.61 9608.07 31236.82
00:13:01.222 PCIE (0000:00:12.0) NSID 3 from core 0: 8634.35 101.18 14748.54 9751.79 28767.81
00:13:01.222 ========================================================
00:13:01.222 Total : 51806.12 607.10 14813.99 9500.98 40096.95
00:13:01.222
00:13:01.222 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:13:01.222 =================================================================================
00:13:01.222 1.00000% : 9949.556us
00:13:01.222 10.00000% : 10545.338us
00:13:01.222 25.00000% : 11141.120us
00:13:01.222 50.00000% : 12273.105us
00:13:01.222 75.00000% : 20256.582us
00:13:01.222 90.00000% : 22639.709us
00:13:01.222 95.00000% : 23354.647us
00:13:01.222 98.00000% : 24307.898us
00:13:01.222 99.00000% : 30265.716us
00:13:01.222 99.50000% : 38606.662us
00:13:01.222 99.90000% : 39798.225us
00:13:01.222 99.99000% : 40274.851us
00:13:01.222 99.99900% : 40274.851us
00:13:01.222 99.99990% : 40274.851us
00:13:01.222 99.99999% : 40274.851us
00:13:01.222
00:13:01.222 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:13:01.222 =================================================================================
00:13:01.222 1.00000% : 10068.713us
00:13:01.222 10.00000% : 10545.338us
00:13:01.222 25.00000% : 11081.542us
00:13:01.222 50.00000% : 12332.684us
00:13:01.222 75.00000% : 20494.895us
00:13:01.222 90.00000% : 22282.240us
00:13:01.222 95.00000% : 22997.178us
00:13:01.222 98.00000% : 23831.273us
00:13:01.222 99.00000% : 28716.684us
00:13:01.222 99.50000% : 36223.535us
00:13:01.222 99.90000% : 37415.098us
00:13:01.222 99.99000% : 37653.411us
00:13:01.222 99.99900% : 37653.411us
00:13:01.222 99.99990% : 37653.411us
00:13:01.222 99.99999% : 37653.411us
00:13:01.222
00:13:01.222 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:13:01.222 =================================================================================
00:13:01.222 1.00000% : 10068.713us
00:13:01.222 10.00000% : 10604.916us
00:13:01.222 25.00000% : 11081.542us
00:13:01.222 50.00000% : 12332.684us
00:13:01.222 75.00000% : 20494.895us
00:13:01.222 90.00000% : 22282.240us
00:13:01.222 95.00000% : 22997.178us
00:13:01.222 98.00000% : 23831.273us
00:13:01.222 99.00000% : 27048.495us
00:13:01.222 99.50000% : 34317.033us
00:13:01.222 99.90000% : 35746.909us
00:13:01.222 99.99000% : 35985.222us
00:13:01.222 99.99900% : 35985.222us
00:13:01.222 99.99990% : 35985.222us
00:13:01.222 99.99999% : 35985.222us
00:13:01.222
00:13:01.222 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:13:01.222 =================================================================================
00:13:01.222 1.00000% : 10009.135us
00:13:01.222 10.00000% : 10604.916us
00:13:01.222 25.00000% : 11081.542us
00:13:01.222 50.00000% : 12332.684us
00:13:01.222 75.00000% : 20494.895us
00:13:01.222 90.00000% : 22282.240us
00:13:01.222 95.00000% : 22997.178us
00:13:01.222 98.00000% : 23712.116us
00:13:01.222 99.00000% : 24546.211us
00:13:01.222 99.50000% : 31933.905us
00:13:01.222 99.90000% : 33363.782us
00:13:01.222 99.99000% : 33602.095us
00:13:01.222 99.99900% : 33602.095us
00:13:01.222 99.99990% : 33602.095us
00:13:01.222 99.99999% : 33602.095us
00:13:01.222
00:13:01.222 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:13:01.222 =================================================================================
00:13:01.222 1.00000% : 10128.291us
00:13:01.222 10.00000% : 10604.916us
00:13:01.222 25.00000% : 11081.542us
00:13:01.222 50.00000% : 12273.105us
00:13:01.222 75.00000% : 20494.895us
00:13:01.222 90.00000% : 22282.240us
00:13:01.222 95.00000% : 22878.022us
00:13:01.222 98.00000% : 23712.116us
00:13:01.222 99.00000% : 25856.931us
00:13:01.222 99.50000% : 29669.935us
00:13:01.222 99.90000% : 30980.655us
00:13:01.222 99.99000% : 31457.280us
00:13:01.222 99.99900% : 31457.280us
00:13:01.222 99.99990% : 31457.280us
00:13:01.222 99.99999% : 31457.280us
00:13:01.222
00:13:01.222 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:13:01.222 =================================================================================
00:13:01.222 1.00000% : 10128.291us
00:13:01.222 10.00000% : 10604.916us
00:13:01.222 25.00000% : 11081.542us
00:13:01.222 50.00000% : 12273.105us
00:13:01.222 75.00000% : 20375.738us
00:13:01.222 90.00000% : 22282.240us
00:13:01.222 95.00000% : 22997.178us
00:13:01.222 98.00000% : 23712.116us
00:13:01.222 99.00000% : 24665.367us
00:13:01.222 99.50000% : 27167.651us
00:13:01.222 99.90000% : 28478.371us
00:13:01.222 99.99000% : 28835.840us
00:13:01.222 99.99900% : 28835.840us
00:13:01.222 99.99990% : 28835.840us
00:13:01.222 99.99999% : 28835.840us
00:13:01.222
00:13:01.222 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:13:01.222 ==============================================================================
00:13:01.222 Range in us Cumulative IO count
00:13:01.222 9472.931 - 9532.509: 0.0347% ( 3) 00:13:01.222 9532.509 - 9592.087: 0.0926% ( 5) 00:13:01.222 9592.087 - 9651.665: 0.1620% ( 6) 00:13:01.222 9651.665 - 9711.244: 0.2431% ( 7) 00:13:01.222 9711.244 - 9770.822: 0.3588% ( 10) 00:13:01.222 9770.822 - 9830.400: 0.5324% ( 15) 00:13:01.222 9830.400 - 9889.978: 0.7986% ( 23) 00:13:01.222 9889.978 - 9949.556: 1.3310% ( 46) 00:13:01.222 9949.556 - 10009.135: 1.8171% ( 42) 00:13:01.222 10009.135 - 10068.713: 2.3495% ( 46) 00:13:01.222 10068.713 - 10128.291: 3.0324% ( 59) 00:13:01.222 10128.291 - 10187.869: 3.7731% ( 64) 00:13:01.222 10187.869 - 10247.447: 4.6759% ( 78) 00:13:01.222 10247.447 - 10307.025: 5.6944% ( 88) 00:13:01.222 10307.025 - 10366.604: 6.8750% ( 102) 00:13:01.222 10366.604 - 10426.182: 7.9861% ( 96) 00:13:01.222 10426.182 - 10485.760: 9.0856% ( 95) 00:13:01.222 10485.760 - 10545.338: 10.4051% ( 114) 00:13:01.222 10545.338 - 10604.916: 11.7940% ( 120) 00:13:01.222 10604.916 - 10664.495: 13.2407% ( 125) 00:13:01.222 10664.495 - 10724.073: 14.8727% ( 141) 00:13:01.222 10724.073 - 10783.651: 16.4583% ( 137) 00:13:01.222 10783.651 - 10843.229: 17.9282% ( 127) 00:13:01.222 10843.229 - 10902.807: 19.6759% ( 151) 00:13:01.222 10902.807 - 10962.385: 21.2616% ( 137) 00:13:01.222 10962.385 - 11021.964: 22.9051% ( 142) 00:13:01.222 11021.964 - 11081.542: 24.6296% ( 149) 00:13:01.222 11081.542 - 11141.120: 26.2731% ( 142) 00:13:01.222 11141.120 - 11200.698: 27.8704% ( 138) 00:13:01.222 11200.698 - 11260.276: 29.4213% ( 134) 00:13:01.222 11260.276 - 11319.855: 31.1227% ( 147) 00:13:01.222 11319.855 -
11379.433: 32.7546% ( 141) 00:13:01.222 11379.433 - 11439.011: 34.3171% ( 135) 00:13:01.222 11439.011 - 11498.589: 35.9606% ( 142) 00:13:01.222 11498.589 - 11558.167: 37.6157% ( 143) 00:13:01.222 11558.167 - 11617.745: 38.9699% ( 117) 00:13:01.222 11617.745 - 11677.324: 40.2662% ( 112) 00:13:01.222 11677.324 - 11736.902: 41.4120% ( 99) 00:13:01.222 11736.902 - 11796.480: 42.6736% ( 109) 00:13:01.222 11796.480 - 11856.058: 43.7963% ( 97) 00:13:01.222 11856.058 - 11915.636: 44.7801% ( 85) 00:13:01.222 11915.636 - 11975.215: 45.6134% ( 72) 00:13:01.222 11975.215 - 12034.793: 46.6782% ( 92) 00:13:01.222 12034.793 - 12094.371: 47.5810% ( 78) 00:13:01.222 12094.371 - 12153.949: 48.5185% ( 81) 00:13:01.222 12153.949 - 12213.527: 49.2940% ( 67) 00:13:01.222 12213.527 - 12273.105: 50.0694% ( 67) 00:13:01.222 12273.105 - 12332.684: 50.8333% ( 66) 00:13:01.222 12332.684 - 12392.262: 51.6088% ( 67) 00:13:01.222 12392.262 - 12451.840: 52.3495% ( 64) 00:13:01.222 12451.840 - 12511.418: 53.1019% ( 65) 00:13:01.222 12511.418 - 12570.996: 53.8194% ( 62) 00:13:01.222 12570.996 - 12630.575: 54.6412% ( 71) 00:13:01.222 12630.575 - 12690.153: 55.4282% ( 68) 00:13:01.222 12690.153 - 12749.731: 56.2269% ( 69) 00:13:01.222 12749.731 - 12809.309: 57.0139% ( 68) 00:13:01.222 12809.309 - 12868.887: 57.8588% ( 73) 00:13:01.222 12868.887 - 12928.465: 58.5532% ( 60) 00:13:01.222 12928.465 - 12988.044: 59.3750% ( 71) 00:13:01.222 12988.044 - 13047.622: 60.0694% ( 60) 00:13:01.222 13047.622 - 13107.200: 60.7523% ( 59) 00:13:01.222 13107.200 - 13166.778: 61.4352% ( 59) 00:13:01.222 13166.778 - 13226.356: 62.0718% ( 55) 00:13:01.222 13226.356 - 13285.935: 62.6389% ( 49) 00:13:01.222 13285.935 - 13345.513: 63.1829% ( 47) 00:13:01.222 13345.513 - 13405.091: 63.6921% ( 44) 00:13:01.222 13405.091 - 13464.669: 64.1551% ( 40) 00:13:01.222 13464.669 - 13524.247: 64.6065% ( 39) 00:13:01.222 13524.247 - 13583.825: 65.0347% ( 37) 00:13:01.222 13583.825 - 13643.404: 65.4282% ( 34) 00:13:01.222 13643.404 - 13702.982: 65.7755% ( 30) 00:13:01.222 13702.982 - 13762.560: 66.1806% ( 35) 00:13:01.222 13762.560 - 13822.138: 66.4815% ( 26) 00:13:01.222 13822.138 - 13881.716: 66.8750% ( 34) 00:13:01.222 13881.716 - 13941.295: 67.2685% ( 34) 00:13:01.222 13941.295 - 14000.873: 67.5347% ( 23) 00:13:01.222 14000.873 - 14060.451: 67.7894% ( 22) 00:13:01.222 14060.451 - 14120.029: 68.0440% ( 22) 00:13:01.222 14120.029 - 14179.607: 68.2639% ( 19) 00:13:01.222 14179.607 - 14239.185: 68.4144% ( 13) 00:13:01.222 14239.185 - 14298.764: 68.5532% ( 12) 00:13:01.222 14298.764 - 14358.342: 68.6343% ( 7) 00:13:01.223 14358.342 - 14417.920: 68.7153% ( 7) 00:13:01.223 14417.920 - 14477.498: 68.7616% ( 4) 00:13:01.223 14477.498 - 14537.076: 68.8310% ( 6) 00:13:01.223 14537.076 - 14596.655: 68.8542% ( 2) 00:13:01.223 14596.655 - 14656.233: 68.8773% ( 2) 00:13:01.223 14656.233 - 14715.811: 68.8889% ( 1) 00:13:01.223 16443.578 - 16562.735: 68.9583% ( 6) 00:13:01.223 16562.735 - 16681.891: 69.0741% ( 10) 00:13:01.223 16681.891 - 16801.047: 69.1435% ( 6) 00:13:01.223 16801.047 - 16920.204: 69.2477% ( 9) 00:13:01.223 16920.204 - 17039.360: 69.3403% ( 8) 00:13:01.223 17039.360 - 17158.516: 69.4444% ( 9) 00:13:01.223 17158.516 - 17277.673: 69.5255% ( 7) 00:13:01.223 17277.673 - 17396.829: 69.6181% ( 8) 00:13:01.223 17396.829 - 17515.985: 69.7222% ( 9) 00:13:01.223 17515.985 - 17635.142: 69.8148% ( 8) 00:13:01.223 17635.142 - 17754.298: 69.9190% ( 9) 00:13:01.223 17754.298 - 17873.455: 69.9884% ( 6) 00:13:01.223 17873.455 - 17992.611: 70.0926% ( 9) 00:13:01.223 
17992.611 - 18111.767: 70.1852% ( 8) 00:13:01.223 18111.767 - 18230.924: 70.2662% ( 7) 00:13:01.223 18230.924 - 18350.080: 70.3704% ( 9) 00:13:01.223 18350.080 - 18469.236: 70.4630% ( 8) 00:13:01.223 18469.236 - 18588.393: 70.5671% ( 9) 00:13:01.223 18588.393 - 18707.549: 70.6713% ( 9) 00:13:01.223 18707.549 - 18826.705: 70.7870% ( 10) 00:13:01.223 18826.705 - 18945.862: 70.8912% ( 9) 00:13:01.223 18945.862 - 19065.018: 70.9954% ( 9) 00:13:01.223 19065.018 - 19184.175: 71.1111% ( 10) 00:13:01.223 19184.175 - 19303.331: 71.2384% ( 11) 00:13:01.223 19303.331 - 19422.487: 71.4699% ( 20) 00:13:01.223 19422.487 - 19541.644: 71.8287% ( 31) 00:13:01.223 19541.644 - 19660.800: 72.4421% ( 53) 00:13:01.223 19660.800 - 19779.956: 72.9167% ( 41) 00:13:01.223 19779.956 - 19899.113: 73.6111% ( 60) 00:13:01.223 19899.113 - 20018.269: 74.1435% ( 46) 00:13:01.223 20018.269 - 20137.425: 74.8032% ( 57) 00:13:01.223 20137.425 - 20256.582: 75.3356% ( 46) 00:13:01.223 20256.582 - 20375.738: 76.0069% ( 58) 00:13:01.223 20375.738 - 20494.895: 76.7708% ( 66) 00:13:01.223 20494.895 - 20614.051: 77.6157% ( 73) 00:13:01.223 20614.051 - 20733.207: 78.5301% ( 79) 00:13:01.223 20733.207 - 20852.364: 79.2593% ( 63) 00:13:01.223 20852.364 - 20971.520: 79.9653% ( 61) 00:13:01.223 20971.520 - 21090.676: 80.6944% ( 63) 00:13:01.223 21090.676 - 21209.833: 81.4583% ( 66) 00:13:01.223 21209.833 - 21328.989: 82.1759% ( 62) 00:13:01.223 21328.989 - 21448.145: 82.9051% ( 63) 00:13:01.223 21448.145 - 21567.302: 83.6806% ( 67) 00:13:01.223 21567.302 - 21686.458: 84.4560% ( 67) 00:13:01.223 21686.458 - 21805.615: 85.1389% ( 59) 00:13:01.223 21805.615 - 21924.771: 85.9259% ( 68) 00:13:01.223 21924.771 - 22043.927: 86.7130% ( 68) 00:13:01.223 22043.927 - 22163.084: 87.5116% ( 69) 00:13:01.223 22163.084 - 22282.240: 88.3218% ( 70) 00:13:01.223 22282.240 - 22401.396: 89.1551% ( 72) 00:13:01.223 22401.396 - 22520.553: 89.9421% ( 68) 00:13:01.223 22520.553 - 22639.709: 90.7870% ( 73) 00:13:01.223 22639.709 - 22758.865: 91.6435% ( 74) 00:13:01.223 22758.865 - 22878.022: 92.4074% ( 66) 00:13:01.223 22878.022 - 22997.178: 93.1250% ( 62) 00:13:01.223 22997.178 - 23116.335: 93.9352% ( 70) 00:13:01.223 23116.335 - 23235.491: 94.8148% ( 76) 00:13:01.223 23235.491 - 23354.647: 95.3588% ( 47) 00:13:01.223 23354.647 - 23473.804: 95.9606% ( 52) 00:13:01.223 23473.804 - 23592.960: 96.3657% ( 35) 00:13:01.223 23592.960 - 23712.116: 96.7361% ( 32) 00:13:01.223 23712.116 - 23831.273: 97.0602% ( 28) 00:13:01.223 23831.273 - 23950.429: 97.3958% ( 29) 00:13:01.223 23950.429 - 24069.585: 97.7199% ( 28) 00:13:01.223 24069.585 - 24188.742: 97.9977% ( 24) 00:13:01.223 24188.742 - 24307.898: 98.2292% ( 20) 00:13:01.223 24307.898 - 24427.055: 98.3796% ( 13) 00:13:01.223 24427.055 - 24546.211: 98.4491% ( 6) 00:13:01.223 24546.211 - 24665.367: 98.4838% ( 3) 00:13:01.223 24665.367 - 24784.524: 98.5069% ( 2) 00:13:01.223 24784.524 - 24903.680: 98.5185% ( 1) 00:13:01.223 28359.215 - 28478.371: 98.5417% ( 2) 00:13:01.223 28478.371 - 28597.527: 98.5648% ( 2) 00:13:01.223 28597.527 - 28716.684: 98.5880% ( 2) 00:13:01.223 28716.684 - 28835.840: 98.6227% ( 3) 00:13:01.223 28835.840 - 28954.996: 98.6690% ( 4) 00:13:01.223 28954.996 - 29074.153: 98.7037% ( 3) 00:13:01.223 29074.153 - 29193.309: 98.7384% ( 3) 00:13:01.223 29193.309 - 29312.465: 98.7731% ( 3) 00:13:01.223 29312.465 - 29431.622: 98.8079% ( 3) 00:13:01.223 29431.622 - 29550.778: 98.8426% ( 3) 00:13:01.223 29550.778 - 29669.935: 98.8773% ( 3) 00:13:01.223 29669.935 - 29789.091: 98.9005% ( 2) 00:13:01.223 
29789.091 - 29908.247: 98.9236% ( 2) 00:13:01.223 29908.247 - 30027.404: 98.9699% ( 4) 00:13:01.223 30027.404 - 30146.560: 98.9931% ( 2) 00:13:01.223 30146.560 - 30265.716: 99.0394% ( 4) 00:13:01.223 30265.716 - 30384.873: 99.0741% ( 3) 00:13:01.223 30384.873 - 30504.029: 99.1088% ( 3) 00:13:01.223 30504.029 - 30742.342: 99.1667% ( 5) 00:13:01.223 30742.342 - 30980.655: 99.2477% ( 7) 00:13:01.223 30980.655 - 31218.967: 99.2593% ( 1) 00:13:01.223 37415.098 - 37653.411: 99.2708% ( 1) 00:13:01.223 37653.411 - 37891.724: 99.3403% ( 6) 00:13:01.223 37891.724 - 38130.036: 99.4097% ( 6) 00:13:01.223 38130.036 - 38368.349: 99.4676% ( 5) 00:13:01.223 38368.349 - 38606.662: 99.5486% ( 7) 00:13:01.223 38606.662 - 38844.975: 99.6181% ( 6) 00:13:01.223 38844.975 - 39083.287: 99.6991% ( 7) 00:13:01.223 39083.287 - 39321.600: 99.7685% ( 6) 00:13:01.223 39321.600 - 39559.913: 99.8380% ( 6) 00:13:01.223 39559.913 - 39798.225: 99.9074% ( 6) 00:13:01.223 39798.225 - 40036.538: 99.9769% ( 6) 00:13:01.223 40036.538 - 40274.851: 100.0000% ( 2) 00:13:01.223 00:13:01.223 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:01.223 ============================================================================== 00:13:01.223 Range in us Cumulative IO count 00:13:01.223 9651.665 - 9711.244: 0.0116% ( 1) 00:13:01.223 9711.244 - 9770.822: 0.1042% ( 8) 00:13:01.223 9770.822 - 9830.400: 0.1736% ( 6) 00:13:01.223 9830.400 - 9889.978: 0.2546% ( 7) 00:13:01.223 9889.978 - 9949.556: 0.3819% ( 11) 00:13:01.223 9949.556 - 10009.135: 0.6713% ( 25) 00:13:01.223 10009.135 - 10068.713: 1.2269% ( 48) 00:13:01.223 10068.713 - 10128.291: 1.9213% ( 60) 00:13:01.223 10128.291 - 10187.869: 2.6968% ( 67) 00:13:01.223 10187.869 - 10247.447: 3.3912% ( 60) 00:13:01.223 10247.447 - 10307.025: 4.3981% ( 87) 00:13:01.223 10307.025 - 10366.604: 5.4051% ( 87) 00:13:01.223 10366.604 - 10426.182: 6.7477% ( 116) 00:13:01.223 10426.182 - 10485.760: 8.4954% ( 151) 00:13:01.223 10485.760 - 10545.338: 10.1389% ( 142) 00:13:01.223 10545.338 - 10604.916: 11.7477% ( 139) 00:13:01.223 10604.916 - 10664.495: 13.2639% ( 131) 00:13:01.223 10664.495 - 10724.073: 14.9306% ( 144) 00:13:01.223 10724.073 - 10783.651: 16.4583% ( 132) 00:13:01.223 10783.651 - 10843.229: 18.1944% ( 150) 00:13:01.223 10843.229 - 10902.807: 19.9421% ( 151) 00:13:01.223 10902.807 - 10962.385: 21.8171% ( 162) 00:13:01.223 10962.385 - 11021.964: 23.6921% ( 162) 00:13:01.223 11021.964 - 11081.542: 25.4630% ( 153) 00:13:01.223 11081.542 - 11141.120: 27.2917% ( 158) 00:13:01.223 11141.120 - 11200.698: 28.9931% ( 147) 00:13:01.223 11200.698 - 11260.276: 30.5787% ( 137) 00:13:01.223 11260.276 - 11319.855: 32.1296% ( 134) 00:13:01.223 11319.855 - 11379.433: 33.6111% ( 128) 00:13:01.223 11379.433 - 11439.011: 34.9653% ( 117) 00:13:01.223 11439.011 - 11498.589: 36.0995% ( 98) 00:13:01.223 11498.589 - 11558.167: 37.2338% ( 98) 00:13:01.223 11558.167 - 11617.745: 38.4375% ( 104) 00:13:01.223 11617.745 - 11677.324: 39.6991% ( 109) 00:13:01.223 11677.324 - 11736.902: 40.9606% ( 109) 00:13:01.223 11736.902 - 11796.480: 42.0139% ( 91) 00:13:01.223 11796.480 - 11856.058: 43.0440% ( 89) 00:13:01.223 11856.058 - 11915.636: 44.1088% ( 92) 00:13:01.223 11915.636 - 11975.215: 45.1968% ( 94) 00:13:01.223 11975.215 - 12034.793: 46.2500% ( 91) 00:13:01.223 12034.793 - 12094.371: 47.1065% ( 74) 00:13:01.223 12094.371 - 12153.949: 47.9514% ( 73) 00:13:01.223 12153.949 - 12213.527: 48.6806% ( 63) 00:13:01.223 12213.527 - 12273.105: 49.4213% ( 64) 00:13:01.223 12273.105 - 12332.684: 50.0926% ( 58) 
00:13:01.223 12332.684 - 12392.262: 50.8565% ( 66) 00:13:01.223 12392.262 - 12451.840: 51.7245% ( 75) 00:13:01.223 12451.840 - 12511.418: 52.5694% ( 73) 00:13:01.223 12511.418 - 12570.996: 53.3449% ( 67) 00:13:01.223 12570.996 - 12630.575: 54.1204% ( 67) 00:13:01.223 12630.575 - 12690.153: 54.8727% ( 65) 00:13:01.223 12690.153 - 12749.731: 55.6944% ( 71) 00:13:01.223 12749.731 - 12809.309: 56.4120% ( 62) 00:13:01.223 12809.309 - 12868.887: 57.1296% ( 62) 00:13:01.223 12868.887 - 12928.465: 57.8241% ( 60) 00:13:01.223 12928.465 - 12988.044: 58.6690% ( 73) 00:13:01.223 12988.044 - 13047.622: 59.4213% ( 65) 00:13:01.223 13047.622 - 13107.200: 60.1273% ( 61) 00:13:01.223 13107.200 - 13166.778: 60.7870% ( 57) 00:13:01.223 13166.778 - 13226.356: 61.4931% ( 61) 00:13:01.223 13226.356 - 13285.935: 62.1528% ( 57) 00:13:01.223 13285.935 - 13345.513: 62.8009% ( 56) 00:13:01.223 13345.513 - 13405.091: 63.4606% ( 57) 00:13:01.223 13405.091 - 13464.669: 64.0278% ( 49) 00:13:01.223 13464.669 - 13524.247: 64.5370% ( 44) 00:13:01.223 13524.247 - 13583.825: 64.9769% ( 38) 00:13:01.223 13583.825 - 13643.404: 65.3819% ( 35) 00:13:01.223 13643.404 - 13702.982: 65.7176% ( 29) 00:13:01.223 13702.982 - 13762.560: 66.0880% ( 32) 00:13:01.223 13762.560 - 13822.138: 66.4583% ( 32) 00:13:01.223 13822.138 - 13881.716: 66.7940% ( 29) 00:13:01.223 13881.716 - 13941.295: 67.0833% ( 25) 00:13:01.223 13941.295 - 14000.873: 67.3727% ( 25) 00:13:01.223 14000.873 - 14060.451: 67.6273% ( 22) 00:13:01.223 14060.451 - 14120.029: 67.8935% ( 23) 00:13:01.223 14120.029 - 14179.607: 68.0556% ( 14) 00:13:01.223 14179.607 - 14239.185: 68.1713% ( 10) 00:13:01.223 14239.185 - 14298.764: 68.2639% ( 8) 00:13:01.223 14298.764 - 14358.342: 68.3681% ( 9) 00:13:01.223 14358.342 - 14417.920: 68.4375% ( 6) 00:13:01.223 14417.920 - 14477.498: 68.4954% ( 5) 00:13:01.223 14477.498 - 14537.076: 68.5185% ( 2) 00:13:01.223 14537.076 - 14596.655: 68.5648% ( 4) 00:13:01.223 14596.655 - 14656.233: 68.5995% ( 3) 00:13:01.223 14656.233 - 14715.811: 68.6343% ( 3) 00:13:01.223 14715.811 - 14775.389: 68.6690% ( 3) 00:13:01.223 14775.389 - 14834.967: 68.6806% ( 1) 00:13:01.223 14834.967 - 14894.545: 68.7037% ( 2) 00:13:01.223 14894.545 - 14954.124: 68.7153% ( 1) 00:13:01.223 14954.124 - 15013.702: 68.7384% ( 2) 00:13:01.223 15013.702 - 15073.280: 68.7500% ( 1) 00:13:01.223 15073.280 - 15132.858: 68.7731% ( 2) 00:13:01.223 15132.858 - 15192.436: 68.7963% ( 2) 00:13:01.223 15192.436 - 15252.015: 68.8194% ( 2) 00:13:01.223 15252.015 - 15371.171: 68.8542% ( 3) 00:13:01.223 15371.171 - 15490.327: 68.8773% ( 2) 00:13:01.223 15490.327 - 15609.484: 68.8889% ( 1) 00:13:01.223 15728.640 - 15847.796: 69.0046% ( 10) 00:13:01.223 15847.796 - 15966.953: 69.0394% ( 3) 00:13:01.223 15966.953 - 16086.109: 69.0856% ( 4) 00:13:01.223 16086.109 - 16205.265: 69.1319% ( 4) 00:13:01.223 16205.265 - 16324.422: 69.1667% ( 3) 00:13:01.224 16324.422 - 16443.578: 69.2014% ( 3) 00:13:01.224 16443.578 - 16562.735: 69.2361% ( 3) 00:13:01.224 16562.735 - 16681.891: 69.2824% ( 4) 00:13:01.224 16681.891 - 16801.047: 69.3171% ( 3) 00:13:01.224 16801.047 - 16920.204: 69.3519% ( 3) 00:13:01.224 16920.204 - 17039.360: 69.3981% ( 4) 00:13:01.224 17039.360 - 17158.516: 69.4329% ( 3) 00:13:01.224 17158.516 - 17277.673: 69.4792% ( 4) 00:13:01.224 17277.673 - 17396.829: 69.5139% ( 3) 00:13:01.224 17396.829 - 17515.985: 69.5602% ( 4) 00:13:01.224 17515.985 - 17635.142: 69.5949% ( 3) 00:13:01.224 17635.142 - 17754.298: 69.6296% ( 3) 00:13:01.224 17992.611 - 18111.767: 69.7338% ( 9) 00:13:01.224 
18111.767 - 18230.924: 69.8264% ( 8) 00:13:01.224 18230.924 - 18350.080: 69.9074% ( 7) 00:13:01.224 18350.080 - 18469.236: 70.0000% ( 8) 00:13:01.224 18469.236 - 18588.393: 70.0810% ( 7) 00:13:01.224 18588.393 - 18707.549: 70.1852% ( 9) 00:13:01.224 18707.549 - 18826.705: 70.3009% ( 10) 00:13:01.224 18826.705 - 18945.862: 70.4282% ( 11) 00:13:01.224 18945.862 - 19065.018: 70.5556% ( 11) 00:13:01.224 19065.018 - 19184.175: 70.6481% ( 8) 00:13:01.224 19184.175 - 19303.331: 70.7986% ( 13) 00:13:01.224 19303.331 - 19422.487: 70.9259% ( 11) 00:13:01.224 19422.487 - 19541.644: 71.0185% ( 8) 00:13:01.224 19541.644 - 19660.800: 71.1458% ( 11) 00:13:01.224 19660.800 - 19779.956: 71.3542% ( 18) 00:13:01.224 19779.956 - 19899.113: 71.7477% ( 34) 00:13:01.224 19899.113 - 20018.269: 72.4653% ( 62) 00:13:01.224 20018.269 - 20137.425: 73.2755% ( 70) 00:13:01.224 20137.425 - 20256.582: 74.1551% ( 76) 00:13:01.224 20256.582 - 20375.738: 74.9306% ( 67) 00:13:01.224 20375.738 - 20494.895: 75.6713% ( 64) 00:13:01.224 20494.895 - 20614.051: 76.4931% ( 71) 00:13:01.224 20614.051 - 20733.207: 77.3032% ( 70) 00:13:01.224 20733.207 - 20852.364: 78.1019% ( 69) 00:13:01.224 20852.364 - 20971.520: 79.0741% ( 84) 00:13:01.224 20971.520 - 21090.676: 80.0810% ( 87) 00:13:01.224 21090.676 - 21209.833: 81.1458% ( 92) 00:13:01.224 21209.833 - 21328.989: 82.2569% ( 96) 00:13:01.224 21328.989 - 21448.145: 83.3218% ( 92) 00:13:01.224 21448.145 - 21567.302: 84.2940% ( 84) 00:13:01.224 21567.302 - 21686.458: 85.3241% ( 89) 00:13:01.224 21686.458 - 21805.615: 86.2847% ( 83) 00:13:01.224 21805.615 - 21924.771: 87.3032% ( 88) 00:13:01.224 21924.771 - 22043.927: 88.1944% ( 77) 00:13:01.224 22043.927 - 22163.084: 89.1088% ( 79) 00:13:01.224 22163.084 - 22282.240: 90.0116% ( 78) 00:13:01.224 22282.240 - 22401.396: 90.9606% ( 82) 00:13:01.224 22401.396 - 22520.553: 91.8287% ( 75) 00:13:01.224 22520.553 - 22639.709: 92.7546% ( 80) 00:13:01.224 22639.709 - 22758.865: 93.7037% ( 82) 00:13:01.224 22758.865 - 22878.022: 94.5023% ( 69) 00:13:01.224 22878.022 - 22997.178: 95.3356% ( 72) 00:13:01.224 22997.178 - 23116.335: 95.9838% ( 56) 00:13:01.224 23116.335 - 23235.491: 96.4583% ( 41) 00:13:01.224 23235.491 - 23354.647: 96.8403% ( 33) 00:13:01.224 23354.647 - 23473.804: 97.1759% ( 29) 00:13:01.224 23473.804 - 23592.960: 97.4653% ( 25) 00:13:01.224 23592.960 - 23712.116: 97.8125% ( 30) 00:13:01.224 23712.116 - 23831.273: 98.1134% ( 26) 00:13:01.224 23831.273 - 23950.429: 98.3449% ( 20) 00:13:01.224 23950.429 - 24069.585: 98.4838% ( 12) 00:13:01.224 24069.585 - 24188.742: 98.5185% ( 3) 00:13:01.224 27048.495 - 27167.651: 98.5417% ( 2) 00:13:01.224 27167.651 - 27286.807: 98.5764% ( 3) 00:13:01.224 27286.807 - 27405.964: 98.6111% ( 3) 00:13:01.224 27405.964 - 27525.120: 98.6458% ( 3) 00:13:01.224 27525.120 - 27644.276: 98.6921% ( 4) 00:13:01.224 27644.276 - 27763.433: 98.7269% ( 3) 00:13:01.224 27763.433 - 27882.589: 98.7616% ( 3) 00:13:01.224 27882.589 - 28001.745: 98.8079% ( 4) 00:13:01.224 28001.745 - 28120.902: 98.8426% ( 3) 00:13:01.224 28120.902 - 28240.058: 98.8773% ( 3) 00:13:01.224 28240.058 - 28359.215: 98.9120% ( 3) 00:13:01.224 28359.215 - 28478.371: 98.9583% ( 4) 00:13:01.224 28478.371 - 28597.527: 98.9931% ( 3) 00:13:01.224 28597.527 - 28716.684: 99.0278% ( 3) 00:13:01.224 28716.684 - 28835.840: 99.0625% ( 3) 00:13:01.224 28835.840 - 28954.996: 99.0972% ( 3) 00:13:01.224 28954.996 - 29074.153: 99.1435% ( 4) 00:13:01.224 29074.153 - 29193.309: 99.1667% ( 2) 00:13:01.224 29193.309 - 29312.465: 99.2014% ( 3) 00:13:01.224 29312.465 - 
29431.622: 99.2361% ( 3) 00:13:01.224 29431.622 - 29550.778: 99.2593% ( 2) 00:13:01.224 35031.971 - 35270.284: 99.2708% ( 1) 00:13:01.224 35270.284 - 35508.596: 99.3519% ( 7) 00:13:01.224 35508.596 - 35746.909: 99.4213% ( 6) 00:13:01.224 35746.909 - 35985.222: 99.4907% ( 6) 00:13:01.224 35985.222 - 36223.535: 99.5718% ( 7) 00:13:01.224 36223.535 - 36461.847: 99.6412% ( 6) 00:13:01.224 36461.847 - 36700.160: 99.7338% ( 8) 00:13:01.224 36700.160 - 36938.473: 99.8148% ( 7) 00:13:01.224 36938.473 - 37176.785: 99.8843% ( 6) 00:13:01.224 37176.785 - 37415.098: 99.9653% ( 7) 00:13:01.224 37415.098 - 37653.411: 100.0000% ( 3) 00:13:01.224 00:13:01.224 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:01.224 ============================================================================== 00:13:01.224 Range in us Cumulative IO count 00:13:01.224 9532.509 - 9592.087: 0.0231% ( 2) 00:13:01.224 9592.087 - 9651.665: 0.0694% ( 4) 00:13:01.224 9651.665 - 9711.244: 0.1157% ( 4) 00:13:01.224 9711.244 - 9770.822: 0.1505% ( 3) 00:13:01.224 9770.822 - 9830.400: 0.2778% ( 11) 00:13:01.224 9830.400 - 9889.978: 0.5093% ( 20) 00:13:01.224 9889.978 - 9949.556: 0.7639% ( 22) 00:13:01.224 9949.556 - 10009.135: 0.9722% ( 18) 00:13:01.224 10009.135 - 10068.713: 1.4352% ( 40) 00:13:01.224 10068.713 - 10128.291: 1.9792% ( 47) 00:13:01.224 10128.291 - 10187.869: 2.5694% ( 51) 00:13:01.224 10187.869 - 10247.447: 3.1944% ( 54) 00:13:01.224 10247.447 - 10307.025: 4.0972% ( 78) 00:13:01.224 10307.025 - 10366.604: 5.2199% ( 97) 00:13:01.224 10366.604 - 10426.182: 6.5509% ( 115) 00:13:01.224 10426.182 - 10485.760: 7.9282% ( 119) 00:13:01.224 10485.760 - 10545.338: 9.5833% ( 143) 00:13:01.224 10545.338 - 10604.916: 11.2847% ( 147) 00:13:01.224 10604.916 - 10664.495: 12.8704% ( 137) 00:13:01.224 10664.495 - 10724.073: 14.4792% ( 139) 00:13:01.224 10724.073 - 10783.651: 16.2269% ( 151) 00:13:01.224 10783.651 - 10843.229: 17.9861% ( 152) 00:13:01.224 10843.229 - 10902.807: 19.8148% ( 158) 00:13:01.224 10902.807 - 10962.385: 21.5046% ( 146) 00:13:01.224 10962.385 - 11021.964: 23.3565% ( 160) 00:13:01.224 11021.964 - 11081.542: 25.1157% ( 152) 00:13:01.224 11081.542 - 11141.120: 26.9213% ( 156) 00:13:01.224 11141.120 - 11200.698: 28.7153% ( 155) 00:13:01.224 11200.698 - 11260.276: 30.3704% ( 143) 00:13:01.224 11260.276 - 11319.855: 32.0023% ( 141) 00:13:01.224 11319.855 - 11379.433: 33.5301% ( 132) 00:13:01.224 11379.433 - 11439.011: 35.0579% ( 132) 00:13:01.224 11439.011 - 11498.589: 36.4931% ( 124) 00:13:01.224 11498.589 - 11558.167: 37.9745% ( 128) 00:13:01.224 11558.167 - 11617.745: 39.2245% ( 108) 00:13:01.224 11617.745 - 11677.324: 40.3935% ( 101) 00:13:01.224 11677.324 - 11736.902: 41.5625% ( 101) 00:13:01.224 11736.902 - 11796.480: 42.8009% ( 107) 00:13:01.224 11796.480 - 11856.058: 43.8657% ( 92) 00:13:01.224 11856.058 - 11915.636: 44.7917% ( 80) 00:13:01.224 11915.636 - 11975.215: 45.7292% ( 81) 00:13:01.224 11975.215 - 12034.793: 46.5509% ( 71) 00:13:01.224 12034.793 - 12094.371: 47.3380% ( 68) 00:13:01.224 12094.371 - 12153.949: 48.2523% ( 79) 00:13:01.224 12153.949 - 12213.527: 49.0509% ( 69) 00:13:01.224 12213.527 - 12273.105: 49.8380% ( 68) 00:13:01.224 12273.105 - 12332.684: 50.6134% ( 67) 00:13:01.224 12332.684 - 12392.262: 51.3657% ( 65) 00:13:01.224 12392.262 - 12451.840: 52.1065% ( 64) 00:13:01.224 12451.840 - 12511.418: 52.8935% ( 68) 00:13:01.224 12511.418 - 12570.996: 53.5532% ( 57) 00:13:01.224 12570.996 - 12630.575: 54.3056% ( 65) 00:13:01.224 12630.575 - 12690.153: 55.0579% ( 65) 00:13:01.224 
12690.153 - 12749.731: 55.7755% ( 62) 00:13:01.224 12749.731 - 12809.309: 56.5972% ( 71) 00:13:01.224 12809.309 - 12868.887: 57.3843% ( 68) 00:13:01.224 12868.887 - 12928.465: 58.1829% ( 69) 00:13:01.224 12928.465 - 12988.044: 58.9352% ( 65) 00:13:01.224 12988.044 - 13047.622: 59.6296% ( 60) 00:13:01.224 13047.622 - 13107.200: 60.3009% ( 58) 00:13:01.224 13107.200 - 13166.778: 60.9606% ( 57) 00:13:01.224 13166.778 - 13226.356: 61.4931% ( 46) 00:13:01.224 13226.356 - 13285.935: 62.0833% ( 51) 00:13:01.224 13285.935 - 13345.513: 62.6273% ( 47) 00:13:01.224 13345.513 - 13405.091: 63.0787% ( 39) 00:13:01.224 13405.091 - 13464.669: 63.5532% ( 41) 00:13:01.224 13464.669 - 13524.247: 63.9931% ( 38) 00:13:01.224 13524.247 - 13583.825: 64.4444% ( 39) 00:13:01.224 13583.825 - 13643.404: 64.8495% ( 35) 00:13:01.224 13643.404 - 13702.982: 65.2894% ( 38) 00:13:01.224 13702.982 - 13762.560: 65.6829% ( 34) 00:13:01.224 13762.560 - 13822.138: 66.0301% ( 30) 00:13:01.224 13822.138 - 13881.716: 66.3889% ( 31) 00:13:01.224 13881.716 - 13941.295: 66.7477% ( 31) 00:13:01.224 13941.295 - 14000.873: 67.0718% ( 28) 00:13:01.224 14000.873 - 14060.451: 67.3148% ( 21) 00:13:01.224 14060.451 - 14120.029: 67.5116% ( 17) 00:13:01.225 14120.029 - 14179.607: 67.7083% ( 17) 00:13:01.225 14179.607 - 14239.185: 67.8472% ( 12) 00:13:01.225 14239.185 - 14298.764: 67.9745% ( 11) 00:13:01.225 14298.764 - 14358.342: 68.1134% ( 12) 00:13:01.225 14358.342 - 14417.920: 68.2176% ( 9) 00:13:01.225 14417.920 - 14477.498: 68.2870% ( 6) 00:13:01.225 14477.498 - 14537.076: 68.3218% ( 3) 00:13:01.225 14537.076 - 14596.655: 68.3681% ( 4) 00:13:01.225 14596.655 - 14656.233: 68.4144% ( 4) 00:13:01.225 14656.233 - 14715.811: 68.4491% ( 3) 00:13:01.225 14715.811 - 14775.389: 68.4838% ( 3) 00:13:01.225 14775.389 - 14834.967: 68.5185% ( 3) 00:13:01.225 14834.967 - 14894.545: 68.5532% ( 3) 00:13:01.225 14894.545 - 14954.124: 68.5880% ( 3) 00:13:01.225 14954.124 - 15013.702: 68.6343% ( 4) 00:13:01.225 15013.702 - 15073.280: 68.6574% ( 2) 00:13:01.225 15073.280 - 15132.858: 68.6690% ( 1) 00:13:01.225 15132.858 - 15192.436: 68.6921% ( 2) 00:13:01.225 15192.436 - 15252.015: 68.7037% ( 1) 00:13:01.225 15252.015 - 15371.171: 68.7384% ( 3) 00:13:01.225 15371.171 - 15490.327: 68.7731% ( 3) 00:13:01.225 15490.327 - 15609.484: 68.9352% ( 14) 00:13:01.225 15609.484 - 15728.640: 69.0162% ( 7) 00:13:01.225 15728.640 - 15847.796: 69.0856% ( 6) 00:13:01.225 15847.796 - 15966.953: 69.1204% ( 3) 00:13:01.225 15966.953 - 16086.109: 69.1551% ( 3) 00:13:01.225 16086.109 - 16205.265: 69.1898% ( 3) 00:13:01.225 16205.265 - 16324.422: 69.2245% ( 3) 00:13:01.225 16324.422 - 16443.578: 69.2708% ( 4) 00:13:01.225 16443.578 - 16562.735: 69.3056% ( 3) 00:13:01.225 16562.735 - 16681.891: 69.3519% ( 4) 00:13:01.225 16681.891 - 16801.047: 69.3866% ( 3) 00:13:01.225 16801.047 - 16920.204: 69.4329% ( 4) 00:13:01.225 16920.204 - 17039.360: 69.4792% ( 4) 00:13:01.225 17039.360 - 17158.516: 69.5139% ( 3) 00:13:01.225 17158.516 - 17277.673: 69.5602% ( 4) 00:13:01.225 17277.673 - 17396.829: 69.5833% ( 2) 00:13:01.225 17396.829 - 17515.985: 69.6181% ( 3) 00:13:01.225 17515.985 - 17635.142: 69.6296% ( 1) 00:13:01.225 18111.767 - 18230.924: 69.6759% ( 4) 00:13:01.225 18230.924 - 18350.080: 69.7222% ( 4) 00:13:01.225 18350.080 - 18469.236: 69.7569% ( 3) 00:13:01.225 18469.236 - 18588.393: 69.9190% ( 14) 00:13:01.225 18588.393 - 18707.549: 70.0810% ( 14) 00:13:01.225 18707.549 - 18826.705: 70.2315% ( 13) 00:13:01.225 18826.705 - 18945.862: 70.3241% ( 8) 00:13:01.225 18945.862 - 19065.018: 
70.4051% ( 7) 00:13:01.225 19065.018 - 19184.175: 70.4977% ( 8) 00:13:01.225 19184.175 - 19303.331: 70.5903% ( 8) 00:13:01.225 19303.331 - 19422.487: 70.6829% ( 8) 00:13:01.225 19422.487 - 19541.644: 70.7639% ( 7) 00:13:01.225 19541.644 - 19660.800: 70.9259% ( 14) 00:13:01.225 19660.800 - 19779.956: 71.2037% ( 24) 00:13:01.225 19779.956 - 19899.113: 71.4583% ( 22) 00:13:01.225 19899.113 - 20018.269: 72.1412% ( 59) 00:13:01.225 20018.269 - 20137.425: 72.8819% ( 64) 00:13:01.225 20137.425 - 20256.582: 73.6806% ( 69) 00:13:01.225 20256.582 - 20375.738: 74.5949% ( 79) 00:13:01.225 20375.738 - 20494.895: 75.4977% ( 78) 00:13:01.225 20494.895 - 20614.051: 76.2963% ( 69) 00:13:01.225 20614.051 - 20733.207: 77.2338% ( 81) 00:13:01.225 20733.207 - 20852.364: 78.3218% ( 94) 00:13:01.225 20852.364 - 20971.520: 79.3981% ( 93) 00:13:01.225 20971.520 - 21090.676: 80.4745% ( 93) 00:13:01.225 21090.676 - 21209.833: 81.6088% ( 98) 00:13:01.225 21209.833 - 21328.989: 82.6389% ( 89) 00:13:01.225 21328.989 - 21448.145: 83.6921% ( 91) 00:13:01.225 21448.145 - 21567.302: 84.6296% ( 81) 00:13:01.225 21567.302 - 21686.458: 85.5208% ( 77) 00:13:01.225 21686.458 - 21805.615: 86.4468% ( 80) 00:13:01.225 21805.615 - 21924.771: 87.3727% ( 80) 00:13:01.225 21924.771 - 22043.927: 88.2870% ( 79) 00:13:01.225 22043.927 - 22163.084: 89.2477% ( 83) 00:13:01.225 22163.084 - 22282.240: 90.1852% ( 81) 00:13:01.225 22282.240 - 22401.396: 91.1343% ( 82) 00:13:01.225 22401.396 - 22520.553: 92.0833% ( 82) 00:13:01.225 22520.553 - 22639.709: 92.9977% ( 79) 00:13:01.225 22639.709 - 22758.865: 93.9005% ( 78) 00:13:01.225 22758.865 - 22878.022: 94.7917% ( 77) 00:13:01.225 22878.022 - 22997.178: 95.5903% ( 69) 00:13:01.225 22997.178 - 23116.335: 96.2037% ( 53) 00:13:01.225 23116.335 - 23235.491: 96.6088% ( 35) 00:13:01.225 23235.491 - 23354.647: 96.9444% ( 29) 00:13:01.225 23354.647 - 23473.804: 97.2801% ( 29) 00:13:01.225 23473.804 - 23592.960: 97.5694% ( 25) 00:13:01.225 23592.960 - 23712.116: 97.8935% ( 28) 00:13:01.225 23712.116 - 23831.273: 98.1481% ( 22) 00:13:01.225 23831.273 - 23950.429: 98.4028% ( 22) 00:13:01.225 23950.429 - 24069.585: 98.4954% ( 8) 00:13:01.225 24069.585 - 24188.742: 98.5185% ( 2) 00:13:01.225 25499.462 - 25618.618: 98.5995% ( 7) 00:13:01.225 25618.618 - 25737.775: 98.6227% ( 2) 00:13:01.225 25737.775 - 25856.931: 98.6574% ( 3) 00:13:01.225 25856.931 - 25976.087: 98.7037% ( 4) 00:13:01.225 25976.087 - 26095.244: 98.7384% ( 3) 00:13:01.225 26095.244 - 26214.400: 98.7731% ( 3) 00:13:01.225 26214.400 - 26333.556: 98.8194% ( 4) 00:13:01.225 26333.556 - 26452.713: 98.8542% ( 3) 00:13:01.225 26452.713 - 26571.869: 98.8773% ( 2) 00:13:01.225 26571.869 - 26691.025: 98.9236% ( 4) 00:13:01.225 26691.025 - 26810.182: 98.9583% ( 3) 00:13:01.225 26810.182 - 26929.338: 98.9931% ( 3) 00:13:01.225 26929.338 - 27048.495: 99.0394% ( 4) 00:13:01.225 27048.495 - 27167.651: 99.0741% ( 3) 00:13:01.225 27167.651 - 27286.807: 99.1204% ( 4) 00:13:01.225 27286.807 - 27405.964: 99.1551% ( 3) 00:13:01.225 27405.964 - 27525.120: 99.1898% ( 3) 00:13:01.225 27525.120 - 27644.276: 99.2245% ( 3) 00:13:01.225 27644.276 - 27763.433: 99.2593% ( 3) 00:13:01.225 33363.782 - 33602.095: 99.3056% ( 4) 00:13:01.225 33602.095 - 33840.407: 99.3750% ( 6) 00:13:01.225 33840.407 - 34078.720: 99.4444% ( 6) 00:13:01.225 34078.720 - 34317.033: 99.5139% ( 6) 00:13:01.225 34317.033 - 34555.345: 99.5949% ( 7) 00:13:01.225 34555.345 - 34793.658: 99.6759% ( 7) 00:13:01.225 34793.658 - 35031.971: 99.7338% ( 5) 00:13:01.225 35031.971 - 35270.284: 99.8032% ( 6) 
00:13:01.225 35270.284 - 35508.596: 99.8843% ( 7) 00:13:01.225 35508.596 - 35746.909: 99.9537% ( 6) 00:13:01.225 35746.909 - 35985.222: 100.0000% ( 4) 00:13:01.225 00:13:01.225 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:01.225 ============================================================================== 00:13:01.225 Range in us Cumulative IO count 00:13:01.225 9592.087 - 9651.665: 0.0579% ( 5) 00:13:01.225 9651.665 - 9711.244: 0.1157% ( 5) 00:13:01.225 9711.244 - 9770.822: 0.2199% ( 9) 00:13:01.225 9770.822 - 9830.400: 0.3472% ( 11) 00:13:01.225 9830.400 - 9889.978: 0.5208% ( 15) 00:13:01.225 9889.978 - 9949.556: 0.7407% ( 19) 00:13:01.225 9949.556 - 10009.135: 1.0648% ( 28) 00:13:01.225 10009.135 - 10068.713: 1.5394% ( 41) 00:13:01.225 10068.713 - 10128.291: 2.0602% ( 45) 00:13:01.225 10128.291 - 10187.869: 2.5694% ( 44) 00:13:01.225 10187.869 - 10247.447: 3.0556% ( 42) 00:13:01.225 10247.447 - 10307.025: 3.7847% ( 63) 00:13:01.225 10307.025 - 10366.604: 4.7801% ( 86) 00:13:01.225 10366.604 - 10426.182: 6.1690% ( 120) 00:13:01.225 10426.182 - 10485.760: 7.6505% ( 128) 00:13:01.225 10485.760 - 10545.338: 9.5949% ( 168) 00:13:01.225 10545.338 - 10604.916: 11.3773% ( 154) 00:13:01.225 10604.916 - 10664.495: 13.1134% ( 150) 00:13:01.225 10664.495 - 10724.073: 14.5486% ( 124) 00:13:01.225 10724.073 - 10783.651: 16.0995% ( 134) 00:13:01.225 10783.651 - 10843.229: 17.8009% ( 147) 00:13:01.225 10843.229 - 10902.807: 19.7454% ( 168) 00:13:01.225 10902.807 - 10962.385: 21.6319% ( 163) 00:13:01.225 10962.385 - 11021.964: 23.5301% ( 164) 00:13:01.225 11021.964 - 11081.542: 25.3588% ( 158) 00:13:01.225 11081.542 - 11141.120: 27.3727% ( 174) 00:13:01.225 11141.120 - 11200.698: 29.0972% ( 149) 00:13:01.225 11200.698 - 11260.276: 30.8218% ( 149) 00:13:01.225 11260.276 - 11319.855: 32.4653% ( 142) 00:13:01.225 11319.855 - 11379.433: 34.0394% ( 136) 00:13:01.225 11379.433 - 11439.011: 35.3356% ( 112) 00:13:01.225 11439.011 - 11498.589: 36.8287% ( 129) 00:13:01.225 11498.589 - 11558.167: 38.1134% ( 111) 00:13:01.225 11558.167 - 11617.745: 39.2477% ( 98) 00:13:01.225 11617.745 - 11677.324: 40.3588% ( 96) 00:13:01.225 11677.324 - 11736.902: 41.4236% ( 92) 00:13:01.225 11736.902 - 11796.480: 42.4884% ( 92) 00:13:01.225 11796.480 - 11856.058: 43.6458% ( 100) 00:13:01.225 11856.058 - 11915.636: 44.7569% ( 96) 00:13:01.225 11915.636 - 11975.215: 45.7060% ( 82) 00:13:01.225 11975.215 - 12034.793: 46.5625% ( 74) 00:13:01.225 12034.793 - 12094.371: 47.4769% ( 79) 00:13:01.225 12094.371 - 12153.949: 48.3102% ( 72) 00:13:01.225 12153.949 - 12213.527: 49.1782% ( 75) 00:13:01.225 12213.527 - 12273.105: 49.9884% ( 70) 00:13:01.225 12273.105 - 12332.684: 50.7755% ( 68) 00:13:01.225 12332.684 - 12392.262: 51.5394% ( 66) 00:13:01.225 12392.262 - 12451.840: 52.2917% ( 65) 00:13:01.225 12451.840 - 12511.418: 53.0093% ( 62) 00:13:01.225 12511.418 - 12570.996: 53.7269% ( 62) 00:13:01.225 12570.996 - 12630.575: 54.4097% ( 59) 00:13:01.225 12630.575 - 12690.153: 55.1042% ( 60) 00:13:01.225 12690.153 - 12749.731: 55.9838% ( 76) 00:13:01.225 12749.731 - 12809.309: 56.7014% ( 62) 00:13:01.225 12809.309 - 12868.887: 57.4421% ( 64) 00:13:01.225 12868.887 - 12928.465: 58.2407% ( 69) 00:13:01.225 12928.465 - 12988.044: 58.8426% ( 52) 00:13:01.225 12988.044 - 13047.622: 59.4907% ( 56) 00:13:01.225 13047.622 - 13107.200: 60.0926% ( 52) 00:13:01.225 13107.200 - 13166.778: 60.7292% ( 55) 00:13:01.225 13166.778 - 13226.356: 61.2847% ( 48) 00:13:01.225 13226.356 - 13285.935: 61.8750% ( 51) 00:13:01.225 13285.935 - 
13345.513: 62.2569% ( 33) 00:13:01.225 13345.513 - 13405.091: 62.6505% ( 34) 00:13:01.225 13405.091 - 13464.669: 63.0671% ( 36) 00:13:01.225 13464.669 - 13524.247: 63.5417% ( 41) 00:13:01.225 13524.247 - 13583.825: 63.9699% ( 37) 00:13:01.225 13583.825 - 13643.404: 64.3519% ( 33) 00:13:01.225 13643.404 - 13702.982: 64.7454% ( 34) 00:13:01.225 13702.982 - 13762.560: 65.1389% ( 34) 00:13:01.225 13762.560 - 13822.138: 65.5208% ( 33) 00:13:01.226 13822.138 - 13881.716: 65.8912% ( 32) 00:13:01.226 13881.716 - 13941.295: 66.2384% ( 30) 00:13:01.226 13941.295 - 14000.873: 66.5509% ( 27) 00:13:01.226 14000.873 - 14060.451: 66.8056% ( 22) 00:13:01.226 14060.451 - 14120.029: 67.0255% ( 19) 00:13:01.226 14120.029 - 14179.607: 67.1991% ( 15) 00:13:01.226 14179.607 - 14239.185: 67.3148% ( 10) 00:13:01.226 14239.185 - 14298.764: 67.4306% ( 10) 00:13:01.226 14298.764 - 14358.342: 67.5463% ( 10) 00:13:01.226 14358.342 - 14417.920: 67.6620% ( 10) 00:13:01.226 14417.920 - 14477.498: 67.7431% ( 7) 00:13:01.226 14477.498 - 14537.076: 67.8125% ( 6) 00:13:01.226 14537.076 - 14596.655: 67.8704% ( 5) 00:13:01.226 14596.655 - 14656.233: 67.9051% ( 3) 00:13:01.226 14656.233 - 14715.811: 68.0324% ( 11) 00:13:01.226 14715.811 - 14775.389: 68.1019% ( 6) 00:13:01.226 14775.389 - 14834.967: 68.1481% ( 4) 00:13:01.226 14834.967 - 14894.545: 68.1944% ( 4) 00:13:01.226 14894.545 - 14954.124: 68.2407% ( 4) 00:13:01.226 14954.124 - 15013.702: 68.2986% ( 5) 00:13:01.226 15013.702 - 15073.280: 68.3449% ( 4) 00:13:01.226 15073.280 - 15132.858: 68.4028% ( 5) 00:13:01.226 15132.858 - 15192.436: 68.4606% ( 5) 00:13:01.226 15192.436 - 15252.015: 68.5185% ( 5) 00:13:01.226 15252.015 - 15371.171: 68.6227% ( 9) 00:13:01.226 15371.171 - 15490.327: 68.7269% ( 9) 00:13:01.226 15490.327 - 15609.484: 68.8310% ( 9) 00:13:01.226 15609.484 - 15728.640: 68.9468% ( 10) 00:13:01.226 15728.640 - 15847.796: 69.0625% ( 10) 00:13:01.226 15847.796 - 15966.953: 69.1667% ( 9) 00:13:01.226 15966.953 - 16086.109: 69.2361% ( 6) 00:13:01.226 16086.109 - 16205.265: 69.3056% ( 6) 00:13:01.226 16205.265 - 16324.422: 69.3866% ( 7) 00:13:01.226 16324.422 - 16443.578: 69.4560% ( 6) 00:13:01.226 16443.578 - 16562.735: 69.5370% ( 7) 00:13:01.226 16562.735 - 16681.891: 69.5949% ( 5) 00:13:01.226 16681.891 - 16801.047: 69.6296% ( 3) 00:13:01.226 17754.298 - 17873.455: 69.6412% ( 1) 00:13:01.226 18230.924 - 18350.080: 69.7106% ( 6) 00:13:01.226 18350.080 - 18469.236: 69.7801% ( 6) 00:13:01.226 18469.236 - 18588.393: 69.8264% ( 4) 00:13:01.226 18588.393 - 18707.549: 69.8727% ( 4) 00:13:01.226 18707.549 - 18826.705: 69.8958% ( 2) 00:13:01.226 18826.705 - 18945.862: 69.9190% ( 2) 00:13:01.226 18945.862 - 19065.018: 69.9306% ( 1) 00:13:01.226 19065.018 - 19184.175: 69.9537% ( 2) 00:13:01.226 19184.175 - 19303.331: 70.0231% ( 6) 00:13:01.226 19303.331 - 19422.487: 70.1042% ( 7) 00:13:01.226 19422.487 - 19541.644: 70.2199% ( 10) 00:13:01.226 19541.644 - 19660.800: 70.3472% ( 11) 00:13:01.226 19660.800 - 19779.956: 70.5903% ( 21) 00:13:01.226 19779.956 - 19899.113: 70.9722% ( 33) 00:13:01.226 19899.113 - 20018.269: 71.5856% ( 53) 00:13:01.226 20018.269 - 20137.425: 72.4074% ( 71) 00:13:01.226 20137.425 - 20256.582: 73.4375% ( 89) 00:13:01.226 20256.582 - 20375.738: 74.3866% ( 82) 00:13:01.226 20375.738 - 20494.895: 75.3356% ( 82) 00:13:01.226 20494.895 - 20614.051: 76.1921% ( 74) 00:13:01.226 20614.051 - 20733.207: 77.0949% ( 78) 00:13:01.226 20733.207 - 20852.364: 78.0787% ( 85) 00:13:01.226 20852.364 - 20971.520: 79.1898% ( 96) 00:13:01.226 20971.520 - 21090.676: 80.4398% 
( 108) 00:13:01.226 21090.676 - 21209.833: 81.6088% ( 101) 00:13:01.226 21209.833 - 21328.989: 82.9051% ( 112) 00:13:01.226 21328.989 - 21448.145: 84.0741% ( 101) 00:13:01.226 21448.145 - 21567.302: 85.0579% ( 85) 00:13:01.226 21567.302 - 21686.458: 86.0417% ( 85) 00:13:01.226 21686.458 - 21805.615: 87.0370% ( 86) 00:13:01.226 21805.615 - 21924.771: 87.9167% ( 76) 00:13:01.226 21924.771 - 22043.927: 88.7847% ( 75) 00:13:01.226 22043.927 - 22163.084: 89.6759% ( 77) 00:13:01.226 22163.084 - 22282.240: 90.5440% ( 75) 00:13:01.226 22282.240 - 22401.396: 91.4583% ( 79) 00:13:01.226 22401.396 - 22520.553: 92.3380% ( 76) 00:13:01.226 22520.553 - 22639.709: 93.2060% ( 75) 00:13:01.226 22639.709 - 22758.865: 94.0509% ( 73) 00:13:01.226 22758.865 - 22878.022: 94.8843% ( 72) 00:13:01.226 22878.022 - 22997.178: 95.6019% ( 62) 00:13:01.226 22997.178 - 23116.335: 96.2153% ( 53) 00:13:01.226 23116.335 - 23235.491: 96.6319% ( 36) 00:13:01.226 23235.491 - 23354.647: 97.0255% ( 34) 00:13:01.226 23354.647 - 23473.804: 97.3727% ( 30) 00:13:01.226 23473.804 - 23592.960: 97.7315% ( 31) 00:13:01.226 23592.960 - 23712.116: 98.0556% ( 28) 00:13:01.226 23712.116 - 23831.273: 98.4144% ( 31) 00:13:01.226 23831.273 - 23950.429: 98.6806% ( 23) 00:13:01.226 23950.429 - 24069.585: 98.8426% ( 14) 00:13:01.226 24069.585 - 24188.742: 98.9005% ( 5) 00:13:01.226 24188.742 - 24307.898: 98.9352% ( 3) 00:13:01.226 24307.898 - 24427.055: 98.9699% ( 3) 00:13:01.226 24427.055 - 24546.211: 99.0046% ( 3) 00:13:01.226 24546.211 - 24665.367: 99.0394% ( 3) 00:13:01.226 24665.367 - 24784.524: 99.0741% ( 3) 00:13:01.226 24784.524 - 24903.680: 99.1088% ( 3) 00:13:01.226 24903.680 - 25022.836: 99.1435% ( 3) 00:13:01.226 25022.836 - 25141.993: 99.1782% ( 3) 00:13:01.226 25141.993 - 25261.149: 99.2245% ( 4) 00:13:01.226 25261.149 - 25380.305: 99.2593% ( 3) 00:13:01.227 30980.655 - 31218.967: 99.2940% ( 3) 00:13:01.227 31218.967 - 31457.280: 99.3750% ( 7) 00:13:01.227 31457.280 - 31695.593: 99.4444% ( 6) 00:13:01.227 31695.593 - 31933.905: 99.5255% ( 7) 00:13:01.227 31933.905 - 32172.218: 99.6065% ( 7) 00:13:01.227 32172.218 - 32410.531: 99.6875% ( 7) 00:13:01.227 32410.531 - 32648.844: 99.7569% ( 6) 00:13:01.227 32648.844 - 32887.156: 99.8380% ( 7) 00:13:01.227 32887.156 - 33125.469: 99.8958% ( 5) 00:13:01.227 33125.469 - 33363.782: 99.9537% ( 5) 00:13:01.227 33363.782 - 33602.095: 100.0000% ( 4) 00:13:01.227 00:13:01.227 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:01.227 ============================================================================== 00:13:01.227 Range in us Cumulative IO count 00:13:01.227 9592.087 - 9651.665: 0.1042% ( 9) 00:13:01.227 9651.665 - 9711.244: 0.2315% ( 11) 00:13:01.227 9711.244 - 9770.822: 0.3241% ( 8) 00:13:01.227 9770.822 - 9830.400: 0.4051% ( 7) 00:13:01.228 9830.400 - 9889.978: 0.4514% ( 4) 00:13:01.228 9889.978 - 9949.556: 0.5208% ( 6) 00:13:01.228 9949.556 - 10009.135: 0.6366% ( 10) 00:13:01.228 10009.135 - 10068.713: 0.9491% ( 27) 00:13:01.228 10068.713 - 10128.291: 1.5046% ( 48) 00:13:01.228 10128.291 - 10187.869: 2.2106% ( 61) 00:13:01.228 10187.869 - 10247.447: 3.0903% ( 76) 00:13:01.228 10247.447 - 10307.025: 4.2245% ( 98) 00:13:01.228 10307.025 - 10366.604: 5.3356% ( 96) 00:13:01.228 10366.604 - 10426.182: 6.6551% ( 114) 00:13:01.228 10426.182 - 10485.760: 7.9167% ( 109) 00:13:01.228 10485.760 - 10545.338: 9.4444% ( 132) 00:13:01.228 10545.338 - 10604.916: 11.1111% ( 144) 00:13:01.228 10604.916 - 10664.495: 12.6968% ( 137) 00:13:01.228 10664.495 - 10724.073: 14.4560% ( 152) 
00:13:01.228 10724.073 - 10783.651: 16.2963% ( 159) 00:13:01.228 10783.651 - 10843.229: 18.1134% ( 157) 00:13:01.228 10843.229 - 10902.807: 20.2083% ( 181) 00:13:01.228 10902.807 - 10962.385: 22.1759% ( 170) 00:13:01.228 10962.385 - 11021.964: 23.9583% ( 154) 00:13:01.228 11021.964 - 11081.542: 25.9028% ( 168) 00:13:01.228 11081.542 - 11141.120: 27.8588% ( 169) 00:13:01.228 11141.120 - 11200.698: 29.7917% ( 167) 00:13:01.228 11200.698 - 11260.276: 31.4236% ( 141) 00:13:01.228 11260.276 - 11319.855: 33.0787% ( 143) 00:13:01.228 11319.855 - 11379.433: 34.6181% ( 133) 00:13:01.228 11379.433 - 11439.011: 36.0417% ( 123) 00:13:01.228 11439.011 - 11498.589: 37.3727% ( 115) 00:13:01.228 11498.589 - 11558.167: 38.8889% ( 131) 00:13:01.228 11558.167 - 11617.745: 39.9537% ( 92) 00:13:01.228 11617.745 - 11677.324: 41.0995% ( 99) 00:13:01.228 11677.324 - 11736.902: 42.1065% ( 87) 00:13:01.228 11736.902 - 11796.480: 43.0556% ( 82) 00:13:01.228 11796.480 - 11856.058: 43.9352% ( 76) 00:13:01.228 11856.058 - 11915.636: 44.9884% ( 91) 00:13:01.228 11915.636 - 11975.215: 46.0185% ( 89) 00:13:01.228 11975.215 - 12034.793: 46.9560% ( 81) 00:13:01.228 12034.793 - 12094.371: 47.8356% ( 76) 00:13:01.228 12094.371 - 12153.949: 48.6921% ( 74) 00:13:01.228 12153.949 - 12213.527: 49.5718% ( 76) 00:13:01.228 12213.527 - 12273.105: 50.3472% ( 67) 00:13:01.228 12273.105 - 12332.684: 51.2037% ( 74) 00:13:01.228 12332.684 - 12392.262: 52.0023% ( 69) 00:13:01.228 12392.262 - 12451.840: 52.7662% ( 66) 00:13:01.228 12451.840 - 12511.418: 53.5185% ( 65) 00:13:01.228 12511.418 - 12570.996: 54.2477% ( 63) 00:13:01.228 12570.996 - 12630.575: 55.0347% ( 68) 00:13:01.228 12630.575 - 12690.153: 55.7755% ( 64) 00:13:01.228 12690.153 - 12749.731: 56.5625% ( 68) 00:13:01.228 12749.731 - 12809.309: 57.2801% ( 62) 00:13:01.228 12809.309 - 12868.887: 58.0556% ( 67) 00:13:01.228 12868.887 - 12928.465: 58.6921% ( 55) 00:13:01.228 12928.465 - 12988.044: 59.3287% ( 55) 00:13:01.228 12988.044 - 13047.622: 60.0000% ( 58) 00:13:01.228 13047.622 - 13107.200: 60.6019% ( 52) 00:13:01.228 13107.200 - 13166.778: 61.1921% ( 51) 00:13:01.228 13166.778 - 13226.356: 61.6898% ( 43) 00:13:01.228 13226.356 - 13285.935: 62.1528% ( 40) 00:13:01.228 13285.935 - 13345.513: 62.5694% ( 36) 00:13:01.228 13345.513 - 13405.091: 62.9861% ( 36) 00:13:01.228 13405.091 - 13464.669: 63.4491% ( 40) 00:13:01.228 13464.669 - 13524.247: 63.8889% ( 38) 00:13:01.228 13524.247 - 13583.825: 64.2824% ( 34) 00:13:01.228 13583.825 - 13643.404: 64.6991% ( 36) 00:13:01.228 13643.404 - 13702.982: 65.0926% ( 34) 00:13:01.228 13702.982 - 13762.560: 65.4745% ( 33) 00:13:01.228 13762.560 - 13822.138: 65.8102% ( 29) 00:13:01.228 13822.138 - 13881.716: 66.0764% ( 23) 00:13:01.228 13881.716 - 13941.295: 66.3773% ( 26) 00:13:01.228 13941.295 - 14000.873: 66.6898% ( 27) 00:13:01.228 14000.873 - 14060.451: 67.0139% ( 28) 00:13:01.228 14060.451 - 14120.029: 67.2801% ( 23) 00:13:01.228 14120.029 - 14179.607: 67.4653% ( 16) 00:13:01.228 14179.607 - 14239.185: 67.5694% ( 9) 00:13:01.228 14239.185 - 14298.764: 67.6273% ( 5) 00:13:01.228 14298.764 - 14358.342: 67.6620% ( 3) 00:13:01.228 14358.342 - 14417.920: 67.6968% ( 3) 00:13:01.228 14417.920 - 14477.498: 67.7315% ( 3) 00:13:01.228 14477.498 - 14537.076: 67.7662% ( 3) 00:13:01.228 14537.076 - 14596.655: 67.7894% ( 2) 00:13:01.228 14596.655 - 14656.233: 67.8125% ( 2) 00:13:01.228 14656.233 - 14715.811: 67.8241% ( 1) 00:13:01.228 14715.811 - 14775.389: 67.8356% ( 1) 00:13:01.228 14775.389 - 14834.967: 67.8819% ( 4) 00:13:01.228 14834.967 - 
14894.545: 67.9282% ( 4) 00:13:01.228 14894.545 - 14954.124: 67.9861% ( 5) 00:13:01.228 14954.124 - 15013.702: 68.0208% ( 3) 00:13:01.228 15013.702 - 15073.280: 68.0556% ( 3) 00:13:01.228 15073.280 - 15132.858: 68.1134% ( 5) 00:13:01.228 15132.858 - 15192.436: 68.1829% ( 6) 00:13:01.228 15192.436 - 15252.015: 68.2292% ( 4) 00:13:01.228 15252.015 - 15371.171: 68.3565% ( 11) 00:13:01.228 15371.171 - 15490.327: 68.4722% ( 10) 00:13:01.228 15490.327 - 15609.484: 68.5995% ( 11) 00:13:01.228 15609.484 - 15728.640: 68.7037% ( 9) 00:13:01.228 15728.640 - 15847.796: 68.7731% ( 6) 00:13:01.228 15847.796 - 15966.953: 68.8426% ( 6) 00:13:01.228 15966.953 - 16086.109: 68.9120% ( 6) 00:13:01.228 16086.109 - 16205.265: 68.9815% ( 6) 00:13:01.228 16205.265 - 16324.422: 69.0625% ( 7) 00:13:01.228 16324.422 - 16443.578: 69.1319% ( 6) 00:13:01.228 16443.578 - 16562.735: 69.2130% ( 7) 00:13:01.228 16562.735 - 16681.891: 69.2824% ( 6) 00:13:01.228 16681.891 - 16801.047: 69.3750% ( 8) 00:13:01.228 16801.047 - 16920.204: 69.4213% ( 4) 00:13:01.228 16920.204 - 17039.360: 69.4560% ( 3) 00:13:01.228 17039.360 - 17158.516: 69.4907% ( 3) 00:13:01.228 17158.516 - 17277.673: 69.5255% ( 3) 00:13:01.228 17277.673 - 17396.829: 69.5602% ( 3) 00:13:01.228 17396.829 - 17515.985: 69.5949% ( 3) 00:13:01.228 17515.985 - 17635.142: 69.6296% ( 3) 00:13:01.228 17754.298 - 17873.455: 69.6412% ( 1) 00:13:01.228 17992.611 - 18111.767: 69.6875% ( 4) 00:13:01.228 18111.767 - 18230.924: 69.7454% ( 5) 00:13:01.228 18230.924 - 18350.080: 69.8148% ( 6) 00:13:01.228 18350.080 - 18469.236: 69.8958% ( 7) 00:13:01.228 18469.236 - 18588.393: 69.9074% ( 1) 00:13:01.228 18588.393 - 18707.549: 69.9306% ( 2) 00:13:01.228 18707.549 - 18826.705: 69.9537% ( 2) 00:13:01.228 18826.705 - 18945.862: 69.9769% ( 2) 00:13:01.228 18945.862 - 19065.018: 70.0926% ( 10) 00:13:01.228 19065.018 - 19184.175: 70.1968% ( 9) 00:13:01.228 19184.175 - 19303.331: 70.2546% ( 5) 00:13:01.228 19303.331 - 19422.487: 70.3125% ( 5) 00:13:01.228 19422.487 - 19541.644: 70.3588% ( 4) 00:13:01.228 19541.644 - 19660.800: 70.4630% ( 9) 00:13:01.228 19660.800 - 19779.956: 70.6366% ( 15) 00:13:01.228 19779.956 - 19899.113: 71.0301% ( 34) 00:13:01.228 19899.113 - 20018.269: 71.6782% ( 56) 00:13:01.228 20018.269 - 20137.425: 72.4190% ( 64) 00:13:01.228 20137.425 - 20256.582: 73.2870% ( 75) 00:13:01.228 20256.582 - 20375.738: 74.2130% ( 80) 00:13:01.228 20375.738 - 20494.895: 75.1968% ( 85) 00:13:01.228 20494.895 - 20614.051: 76.1806% ( 85) 00:13:01.228 20614.051 - 20733.207: 77.0255% ( 73) 00:13:01.228 20733.207 - 20852.364: 78.1134% ( 94) 00:13:01.228 20852.364 - 20971.520: 79.1782% ( 92) 00:13:01.228 20971.520 - 21090.676: 80.2546% ( 93) 00:13:01.228 21090.676 - 21209.833: 81.4468% ( 103) 00:13:01.228 21209.833 - 21328.989: 82.6505% ( 104) 00:13:01.228 21328.989 - 21448.145: 83.9352% ( 111) 00:13:01.228 21448.145 - 21567.302: 85.0694% ( 98) 00:13:01.228 21567.302 - 21686.458: 86.0185% ( 82) 00:13:01.228 21686.458 - 21805.615: 87.0023% ( 85) 00:13:01.228 21805.615 - 21924.771: 87.9282% ( 80) 00:13:01.228 21924.771 - 22043.927: 88.9120% ( 85) 00:13:01.228 22043.927 - 22163.084: 89.8148% ( 78) 00:13:01.228 22163.084 - 22282.240: 90.7407% ( 80) 00:13:01.228 22282.240 - 22401.396: 91.7477% ( 87) 00:13:01.228 22401.396 - 22520.553: 92.6620% ( 79) 00:13:01.228 22520.553 - 22639.709: 93.5764% ( 79) 00:13:01.228 22639.709 - 22758.865: 94.4444% ( 75) 00:13:01.228 22758.865 - 22878.022: 95.3125% ( 75) 00:13:01.228 22878.022 - 22997.178: 95.9954% ( 59) 00:13:01.228 22997.178 - 23116.335: 96.5509% 
( 48) 00:13:01.228 23116.335 - 23235.491: 96.9792% ( 37) 00:13:01.228 23235.491 - 23354.647: 97.3264% ( 30) 00:13:01.228 23354.647 - 23473.804: 97.6620% ( 29) 00:13:01.228 23473.804 - 23592.960: 97.9977% ( 29) 00:13:01.228 23592.960 - 23712.116: 98.2870% ( 25) 00:13:01.228 23712.116 - 23831.273: 98.5764% ( 25) 00:13:01.228 23831.273 - 23950.429: 98.7963% ( 19) 00:13:01.228 23950.429 - 24069.585: 98.8657% ( 6) 00:13:01.228 24069.585 - 24188.742: 98.8889% ( 2) 00:13:01.228 25380.305 - 25499.462: 98.9005% ( 1) 00:13:01.228 25499.462 - 25618.618: 98.9352% ( 3) 00:13:01.228 25618.618 - 25737.775: 98.9815% ( 4) 00:13:01.228 25737.775 - 25856.931: 99.0162% ( 3) 00:13:01.228 25856.931 - 25976.087: 99.0509% ( 3) 00:13:01.228 25976.087 - 26095.244: 99.0972% ( 4) 00:13:01.228 26095.244 - 26214.400: 99.1435% ( 4) 00:13:01.228 26214.400 - 26333.556: 99.1898% ( 4) 00:13:01.228 26333.556 - 26452.713: 99.2245% ( 3) 00:13:01.228 26452.713 - 26571.869: 99.2593% ( 3) 00:13:01.228 28716.684 - 28835.840: 99.2824% ( 2) 00:13:01.228 28835.840 - 28954.996: 99.3287% ( 4) 00:13:01.228 28954.996 - 29074.153: 99.3634% ( 3) 00:13:01.228 29074.153 - 29193.309: 99.3981% ( 3) 00:13:01.228 29193.309 - 29312.465: 99.4329% ( 3) 00:13:01.228 29312.465 - 29431.622: 99.4560% ( 2) 00:13:01.228 29431.622 - 29550.778: 99.4907% ( 3) 00:13:01.228 29550.778 - 29669.935: 99.5255% ( 3) 00:13:01.228 29669.935 - 29789.091: 99.5602% ( 3) 00:13:01.228 29789.091 - 29908.247: 99.5949% ( 3) 00:13:01.228 29908.247 - 30027.404: 99.6296% ( 3) 00:13:01.228 30027.404 - 30146.560: 99.6528% ( 2) 00:13:01.228 30146.560 - 30265.716: 99.6991% ( 4) 00:13:01.228 30265.716 - 30384.873: 99.7338% ( 3) 00:13:01.228 30384.873 - 30504.029: 99.7685% ( 3) 00:13:01.228 30504.029 - 30742.342: 99.8380% ( 6) 00:13:01.228 30742.342 - 30980.655: 99.9190% ( 7) 00:13:01.228 30980.655 - 31218.967: 99.9884% ( 6) 00:13:01.228 31218.967 - 31457.280: 100.0000% ( 1) 00:13:01.228 00:13:01.228 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:01.228 ============================================================================== 00:13:01.228 Range in us Cumulative IO count 00:13:01.228 9711.244 - 9770.822: 0.0231% ( 2) 00:13:01.228 9770.822 - 9830.400: 0.1157% ( 8) 00:13:01.228 9830.400 - 9889.978: 0.2546% ( 12) 00:13:01.228 9889.978 - 9949.556: 0.4051% ( 13) 00:13:01.228 9949.556 - 10009.135: 0.5324% ( 11) 00:13:01.228 10009.135 - 10068.713: 0.9144% ( 33) 00:13:01.228 10068.713 - 10128.291: 1.6435% ( 63) 00:13:01.228 10128.291 - 10187.869: 2.3032% ( 57) 00:13:01.228 10187.869 - 10247.447: 3.1250% ( 71) 00:13:01.228 10247.447 - 10307.025: 3.9815% ( 74) 00:13:01.228 10307.025 - 10366.604: 4.9884% ( 87) 00:13:01.228 10366.604 - 10426.182: 6.1458% ( 100) 00:13:01.228 10426.182 - 10485.760: 7.6042% ( 126) 00:13:01.228 10485.760 - 10545.338: 9.0046% ( 121) 00:13:01.228 10545.338 - 10604.916: 10.7755% ( 153) 00:13:01.228 10604.916 - 10664.495: 12.3264% ( 134) 00:13:01.228 10664.495 - 10724.073: 13.9005% ( 136) 00:13:01.228 10724.073 - 10783.651: 15.7292% ( 158) 00:13:01.229 10783.651 - 10843.229: 17.8704% ( 185) 00:13:01.229 10843.229 - 10902.807: 19.8264% ( 169) 00:13:01.229 10902.807 - 10962.385: 21.7940% ( 170) 00:13:01.229 10962.385 - 11021.964: 24.2130% ( 209) 00:13:01.229 11021.964 - 11081.542: 26.2153% ( 173) 00:13:01.229 11081.542 - 11141.120: 28.0787% ( 161) 00:13:01.229 11141.120 - 11200.698: 29.7917% ( 148) 00:13:01.229 11200.698 - 11260.276: 31.3889% ( 138) 00:13:01.229 11260.276 - 11319.855: 32.9051% ( 131) 00:13:01.229 11319.855 - 11379.433: 34.3403% ( 
124) 00:13:01.229 11379.433 - 11439.011: 35.6019% ( 109) 00:13:01.229 11439.011 - 11498.589: 36.9560% ( 117) 00:13:01.229 11498.589 - 11558.167: 38.3565% ( 121) 00:13:01.229 11558.167 - 11617.745: 39.7917% ( 124) 00:13:01.229 11617.745 - 11677.324: 40.9838% ( 103) 00:13:01.229 11677.324 - 11736.902: 42.1065% ( 97) 00:13:01.229 11736.902 - 11796.480: 43.1944% ( 94) 00:13:01.229 11796.480 - 11856.058: 44.2708% ( 93) 00:13:01.229 11856.058 - 11915.636: 45.1042% ( 72) 00:13:01.229 11915.636 - 11975.215: 45.9606% ( 74) 00:13:01.229 11975.215 - 12034.793: 46.8866% ( 80) 00:13:01.229 12034.793 - 12094.371: 47.7778% ( 77) 00:13:01.229 12094.371 - 12153.949: 48.6111% ( 72) 00:13:01.229 12153.949 - 12213.527: 49.3750% ( 66) 00:13:01.229 12213.527 - 12273.105: 50.3009% ( 80) 00:13:01.229 12273.105 - 12332.684: 51.2616% ( 83) 00:13:01.229 12332.684 - 12392.262: 52.1644% ( 78) 00:13:01.229 12392.262 - 12451.840: 52.9861% ( 71) 00:13:01.229 12451.840 - 12511.418: 53.7847% ( 69) 00:13:01.229 12511.418 - 12570.996: 54.5370% ( 65) 00:13:01.229 12570.996 - 12630.575: 55.3356% ( 69) 00:13:01.229 12630.575 - 12690.153: 56.0995% ( 66) 00:13:01.229 12690.153 - 12749.731: 56.9560% ( 74) 00:13:01.229 12749.731 - 12809.309: 57.7083% ( 65) 00:13:01.229 12809.309 - 12868.887: 58.4606% ( 65) 00:13:01.229 12868.887 - 12928.465: 59.1551% ( 60) 00:13:01.229 12928.465 - 12988.044: 59.6991% ( 47) 00:13:01.229 12988.044 - 13047.622: 60.3125% ( 53) 00:13:01.229 13047.622 - 13107.200: 60.9259% ( 53) 00:13:01.229 13107.200 - 13166.778: 61.5856% ( 57) 00:13:01.229 13166.778 - 13226.356: 62.0833% ( 43) 00:13:01.229 13226.356 - 13285.935: 62.5810% ( 43) 00:13:01.229 13285.935 - 13345.513: 63.0440% ( 40) 00:13:01.229 13345.513 - 13405.091: 63.5185% ( 41) 00:13:01.229 13405.091 - 13464.669: 63.9815% ( 40) 00:13:01.229 13464.669 - 13524.247: 64.4444% ( 40) 00:13:01.229 13524.247 - 13583.825: 64.8611% ( 36) 00:13:01.229 13583.825 - 13643.404: 65.2662% ( 35) 00:13:01.229 13643.404 - 13702.982: 65.6713% ( 35) 00:13:01.229 13702.982 - 13762.560: 66.0764% ( 35) 00:13:01.229 13762.560 - 13822.138: 66.4352% ( 31) 00:13:01.229 13822.138 - 13881.716: 66.8171% ( 33) 00:13:01.229 13881.716 - 13941.295: 67.1412% ( 28) 00:13:01.229 13941.295 - 14000.873: 67.4074% ( 23) 00:13:01.229 14000.873 - 14060.451: 67.6620% ( 22) 00:13:01.229 14060.451 - 14120.029: 67.8704% ( 18) 00:13:01.229 14120.029 - 14179.607: 68.0324% ( 14) 00:13:01.229 14179.607 - 14239.185: 68.0903% ( 5) 00:13:01.229 14239.185 - 14298.764: 68.1250% ( 3) 00:13:01.229 14298.764 - 14358.342: 68.1481% ( 2) 00:13:01.229 15609.484 - 15728.640: 68.1597% ( 1) 00:13:01.229 15728.640 - 15847.796: 68.2060% ( 4) 00:13:01.229 15847.796 - 15966.953: 68.2639% ( 5) 00:13:01.229 15966.953 - 16086.109: 68.3449% ( 7) 00:13:01.229 16086.109 - 16205.265: 68.4259% ( 7) 00:13:01.229 16205.265 - 16324.422: 68.5185% ( 8) 00:13:01.229 16324.422 - 16443.578: 68.5995% ( 7) 00:13:01.229 16443.578 - 16562.735: 68.6921% ( 8) 00:13:01.229 16562.735 - 16681.891: 68.7731% ( 7) 00:13:01.229 16681.891 - 16801.047: 68.8426% ( 6) 00:13:01.229 16801.047 - 16920.204: 68.9352% ( 8) 00:13:01.229 16920.204 - 17039.360: 69.0046% ( 6) 00:13:01.229 17039.360 - 17158.516: 69.0625% ( 5) 00:13:01.229 17158.516 - 17277.673: 69.1319% ( 6) 00:13:01.229 17277.673 - 17396.829: 69.2014% ( 6) 00:13:01.229 17396.829 - 17515.985: 69.2708% ( 6) 00:13:01.229 17515.985 - 17635.142: 69.3403% ( 6) 00:13:01.229 17635.142 - 17754.298: 69.4213% ( 7) 00:13:01.229 17754.298 - 17873.455: 69.4792% ( 5) 00:13:01.229 17873.455 - 17992.611: 69.6875% ( 
18) 00:13:01.229 17992.611 - 18111.767: 69.8380% ( 13) 00:13:01.229 18111.767 - 18230.924: 69.9769% ( 12) 00:13:01.229 18230.924 - 18350.080: 70.1157% ( 12) 00:13:01.229 18350.080 - 18469.236: 70.2546% ( 12) 00:13:01.229 18469.236 - 18588.393: 70.3704% ( 10) 00:13:01.229 18588.393 - 18707.549: 70.4514% ( 7) 00:13:01.229 18707.549 - 18826.705: 70.5556% ( 9) 00:13:01.229 18826.705 - 18945.862: 70.6481% ( 8) 00:13:01.229 18945.862 - 19065.018: 70.7639% ( 10) 00:13:01.229 19065.018 - 19184.175: 70.8565% ( 8) 00:13:01.229 19184.175 - 19303.331: 70.9491% ( 8) 00:13:01.229 19303.331 - 19422.487: 71.0532% ( 9) 00:13:01.229 19422.487 - 19541.644: 71.1690% ( 10) 00:13:01.229 19541.644 - 19660.800: 71.3310% ( 14) 00:13:01.229 19660.800 - 19779.956: 71.5509% ( 19) 00:13:01.229 19779.956 - 19899.113: 71.9676% ( 36) 00:13:01.229 19899.113 - 20018.269: 72.5810% ( 53) 00:13:01.229 20018.269 - 20137.425: 73.3333% ( 65) 00:13:01.229 20137.425 - 20256.582: 74.2245% ( 77) 00:13:01.229 20256.582 - 20375.738: 75.0463% ( 71) 00:13:01.229 20375.738 - 20494.895: 75.9259% ( 76) 00:13:01.229 20494.895 - 20614.051: 76.6435% ( 62) 00:13:01.229 20614.051 - 20733.207: 77.3611% ( 62) 00:13:01.229 20733.207 - 20852.364: 78.3681% ( 87) 00:13:01.229 20852.364 - 20971.520: 79.2593% ( 77) 00:13:01.229 20971.520 - 21090.676: 80.3704% ( 96) 00:13:01.229 21090.676 - 21209.833: 81.5625% ( 103) 00:13:01.229 21209.833 - 21328.989: 82.6736% ( 96) 00:13:01.229 21328.989 - 21448.145: 83.7731% ( 95) 00:13:01.229 21448.145 - 21567.302: 84.7917% ( 88) 00:13:01.229 21567.302 - 21686.458: 85.7407% ( 82) 00:13:01.229 21686.458 - 21805.615: 86.6782% ( 81) 00:13:01.229 21805.615 - 21924.771: 87.5347% ( 74) 00:13:01.229 21924.771 - 22043.927: 88.4606% ( 80) 00:13:01.229 22043.927 - 22163.084: 89.3750% ( 79) 00:13:01.229 22163.084 - 22282.240: 90.2662% ( 77) 00:13:01.229 22282.240 - 22401.396: 91.1921% ( 80) 00:13:01.229 22401.396 - 22520.553: 92.0718% ( 76) 00:13:01.229 22520.553 - 22639.709: 93.0093% ( 81) 00:13:01.229 22639.709 - 22758.865: 93.8310% ( 71) 00:13:01.229 22758.865 - 22878.022: 94.6528% ( 71) 00:13:01.229 22878.022 - 22997.178: 95.4167% ( 66) 00:13:01.229 22997.178 - 23116.335: 95.9722% ( 48) 00:13:01.229 23116.335 - 23235.491: 96.4583% ( 42) 00:13:01.229 23235.491 - 23354.647: 96.9444% ( 42) 00:13:01.229 23354.647 - 23473.804: 97.3611% ( 36) 00:13:01.229 23473.804 - 23592.960: 97.7083% ( 30) 00:13:01.229 23592.960 - 23712.116: 98.0440% ( 29) 00:13:01.229 23712.116 - 23831.273: 98.3681% ( 28) 00:13:01.229 23831.273 - 23950.429: 98.6574% ( 25) 00:13:01.229 23950.429 - 24069.585: 98.7847% ( 11) 00:13:01.229 24069.585 - 24188.742: 98.8542% ( 6) 00:13:01.229 24188.742 - 24307.898: 98.8889% ( 3) 00:13:01.229 24307.898 - 24427.055: 98.9352% ( 4) 00:13:01.229 24427.055 - 24546.211: 98.9815% ( 4) 00:13:01.229 24546.211 - 24665.367: 99.0162% ( 3) 00:13:01.229 24665.367 - 24784.524: 99.0625% ( 4) 00:13:01.229 24784.524 - 24903.680: 99.1088% ( 4) 00:13:01.229 24903.680 - 25022.836: 99.1435% ( 3) 00:13:01.229 25022.836 - 25141.993: 99.1898% ( 4) 00:13:01.229 25141.993 - 25261.149: 99.2245% ( 3) 00:13:01.229 25261.149 - 25380.305: 99.2593% ( 3) 00:13:01.229 26333.556 - 26452.713: 99.2940% ( 3) 00:13:01.229 26452.713 - 26571.869: 99.3287% ( 3) 00:13:01.229 26571.869 - 26691.025: 99.3634% ( 3) 00:13:01.229 26691.025 - 26810.182: 99.3981% ( 3) 00:13:01.229 26810.182 - 26929.338: 99.4329% ( 3) 00:13:01.229 26929.338 - 27048.495: 99.4676% ( 3) 00:13:01.229 27048.495 - 27167.651: 99.5023% ( 3) 00:13:01.229 27167.651 - 27286.807: 99.5370% ( 3) 
00:13:01.229 27286.807 - 27405.964: 99.5718% ( 3) 00:13:01.229 27405.964 - 27525.120: 99.6181% ( 4) 00:13:01.229 27525.120 - 27644.276: 99.6528% ( 3) 00:13:01.229 27644.276 - 27763.433: 99.6875% ( 3) 00:13:01.229 27763.433 - 27882.589: 99.7222% ( 3) 00:13:01.229 27882.589 - 28001.745: 99.7569% ( 3) 00:13:01.229 28001.745 - 28120.902: 99.7917% ( 3) 00:13:01.229 28120.902 - 28240.058: 99.8264% ( 3) 00:13:01.229 28240.058 - 28359.215: 99.8611% ( 3) 00:13:01.229 28359.215 - 28478.371: 99.9074% ( 4) 00:13:01.229 28478.371 - 28597.527: 99.9421% ( 3) 00:13:01.229 28597.527 - 28716.684: 99.9769% ( 3) 00:13:01.229 28716.684 - 28835.840: 100.0000% ( 2) 00:13:01.229 00:13:01.229 ************************************ 00:13:01.229 END TEST nvme_perf 00:13:01.229 ************************************ 00:13:01.229 13:54:25 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:13:01.229 00:13:01.229 real 0m2.877s 00:13:01.229 user 0m2.419s 00:13:01.229 sys 0m0.322s 00:13:01.229 13:54:25 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:01.229 13:54:25 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:13:01.229 13:54:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:01.229 13:54:25 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:13:01.229 13:54:25 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:01.229 13:54:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.229 13:54:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.229 ************************************ 00:13:01.229 START TEST nvme_hello_world 00:13:01.229 ************************************ 00:13:01.229 13:54:25 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:13:01.487 Initializing NVMe Controllers 00:13:01.487 Attached to 0000:00:10.0 00:13:01.487 Namespace ID: 1 size: 6GB 00:13:01.487 Attached to 0000:00:11.0 00:13:01.487 Namespace ID: 1 size: 5GB 00:13:01.487 Attached to 0000:00:13.0 00:13:01.487 Namespace ID: 1 size: 1GB 00:13:01.487 Attached to 0000:00:12.0 00:13:01.487 Namespace ID: 1 size: 4GB 00:13:01.487 Namespace ID: 2 size: 4GB 00:13:01.487 Namespace ID: 3 size: 4GB 00:13:01.487 Initialization complete. 00:13:01.488 INFO: using host memory buffer for IO 00:13:01.488 Hello world! 00:13:01.488 INFO: using host memory buffer for IO 00:13:01.488 Hello world! 00:13:01.488 INFO: using host memory buffer for IO 00:13:01.488 Hello world! 00:13:01.488 INFO: using host memory buffer for IO 00:13:01.488 Hello world! 00:13:01.488 INFO: using host memory buffer for IO 00:13:01.488 Hello world! 00:13:01.488 INFO: using host memory buffer for IO 00:13:01.488 Hello world! 
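For reference, the hello_world example exercised above follows the canonical SPDK flow: attach the controllers, allocate a DMA-safe host buffer (the "using host memory buffer for IO" lines), write a greeting to LBA 0 of each namespace, read it back, and print it. A minimal sketch under that assumption — probe/attach and error handling are trimmed, and exact option structs are per SPDK's include/spdk/nvme.h, so treat this as illustrative rather than the literal example source:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cpl;
	*(bool *)arg = true;            /* mark the write or read as finished */
}

static void hello(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
{
	struct spdk_nvme_qpair *qp = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	uint32_t ss = spdk_nvme_ns_get_sector_size(ns);
	/* Host-memory, DMA-able buffer of one sector. */
	char *buf = spdk_zmalloc(ss, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	bool done = false;

	snprintf(buf, ss, "%s", "Hello world!\n");
	spdk_nvme_ns_cmd_write(ns, qp, buf, 0 /* LBA */, 1 /* count */, io_done, &done, 0);
	while (!done)
		spdk_nvme_qpair_process_completions(qp, 0);

	done = false;
	memset(buf, 0, ss);
	spdk_nvme_ns_cmd_read(ns, qp, buf, 0, 1, io_done, &done, 0);
	while (!done)
		spdk_nvme_qpair_process_completions(qp, 0);

	printf("%s", buf);              /* one "Hello world!" per namespace */
	spdk_free(buf);
	spdk_nvme_ctrlr_free_io_qpair(qp);
}

Run against the four attached controllers this yields the six greetings above, one per active namespace (the 12.0 device exposes three).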
00:13:01.745 00:13:01.745 real 0m0.312s 00:13:01.745 user 0m0.131s 00:13:01.745 sys 0m0.133s 00:13:01.745 13:54:26 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:01.745 ************************************ 00:13:01.745 END TEST nvme_hello_world 00:13:01.745 ************************************ 00:13:01.745 13:54:26 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:01.745 13:54:26 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:01.745 13:54:26 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:01.745 13:54:26 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:01.745 13:54:26 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.745 13:54:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.745 ************************************ 00:13:01.745 START TEST nvme_sgl 00:13:01.745 ************************************ 00:13:01.745 13:54:26 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:02.003 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:13:02.003 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:13:02.003 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:13:02.003 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:13:02.003 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:13:02.003 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:13:02.003 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:13:02.003 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:13:02.003 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:13:02.003 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:13:02.003 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:13:02.003 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:13:02.003 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:13:02.003 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:13:02.003 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:13:02.003 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:13:02.003 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:13:02.003 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:13:02.003 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:13:02.003 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:13:02.003 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:13:02.003 0000:00:12.0: build_io_request_8 Invalid IO length parameter 
00:13:02.003 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:13:02.003 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:13:02.003 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:13:02.003 NVMe Readv/Writev Request test 00:13:02.003 Attached to 0000:00:10.0 00:13:02.003 Attached to 0000:00:11.0 00:13:02.003 Attached to 0000:00:13.0 00:13:02.003 Attached to 0000:00:12.0 00:13:02.003 0000:00:10.0: build_io_request_2 test passed 00:13:02.003 0000:00:10.0: build_io_request_4 test passed 00:13:02.003 0000:00:10.0: build_io_request_5 test passed 00:13:02.003 0000:00:10.0: build_io_request_6 test passed 00:13:02.003 0000:00:10.0: build_io_request_7 test passed 00:13:02.003 0000:00:10.0: build_io_request_10 test passed 00:13:02.003 0000:00:11.0: build_io_request_2 test passed 00:13:02.003 0000:00:11.0: build_io_request_4 test passed 00:13:02.003 0000:00:11.0: build_io_request_5 test passed 00:13:02.003 0000:00:11.0: build_io_request_6 test passed 00:13:02.003 0000:00:11.0: build_io_request_7 test passed 00:13:02.003 0000:00:11.0: build_io_request_10 test passed 00:13:02.003 Cleaning up... 00:13:02.003 ************************************ 00:13:02.003 END TEST nvme_sgl 00:13:02.003 ************************************ 00:13:02.003 00:13:02.003 real 0m0.350s 00:13:02.003 user 0m0.175s 00:13:02.003 sys 0m0.130s 00:13:02.003 13:54:26 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.003 13:54:26 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:13:02.003 13:54:26 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:02.003 13:54:26 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:02.003 13:54:26 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:02.003 13:54:26 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.003 13:54:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.003 ************************************ 00:13:02.003 START TEST nvme_e2edp 00:13:02.003 ************************************ 00:13:02.003 13:54:26 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:02.260 NVMe Write/Read with End-to-End data protection test 00:13:02.260 Attached to 0000:00:10.0 00:13:02.260 Attached to 0000:00:11.0 00:13:02.260 Attached to 0000:00:13.0 00:13:02.260 Attached to 0000:00:12.0 00:13:02.260 Cleaning up... 
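The nvme_sgl pass above builds vectored read/write requests in deliberately legal and illegal shapes: the "Invalid IO length parameter" lines are the expected rejections, the "test passed" lines the accepted ones. The gating rule is essentially that the scatter-gather list must describe a whole, non-zero number of sectors with no empty segments. A hypothetical validator illustrating that check — struct sge and sgl_len_ok are illustrative names, not SPDK API:

#include <stddef.h>
#include <stdint.h>

struct sge { void *base; uint64_t len; };

/* Mirrors why build_io_request_0/1/3/8/9/11 are rejected: the summed
 * SGE lengths must cover a whole, non-zero number of sectors, and no
 * individual segment may be empty. */
static int sgl_len_ok(const struct sge *sgl, size_t nsge, uint32_t sector_size)
{
	uint64_t total = 0;

	for (size_t i = 0; i < nsge; i++) {
		if (sgl[i].len == 0)
			return 0;               /* empty SGE: invalid */
		total += sgl[i].len;
	}
	return total != 0 && (total % sector_size) == 0;
}

In the real test the shapes are fed through spdk_nvme_ns_cmd_writev()/readv() with reset-SGL/next-SGE callbacks, and the driver performs the equivalent validation before submitting to the device.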
00:13:02.260 00:13:02.260 real 0m0.278s 00:13:02.260 user 0m0.104s 00:13:02.260 sys 0m0.133s 00:13:02.260 13:54:26 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.260 13:54:26 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:13:02.260 ************************************ 00:13:02.260 END TEST nvme_e2edp 00:13:02.260 ************************************ 00:13:02.260 13:54:26 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:02.260 13:54:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:02.260 13:54:26 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:02.260 13:54:26 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.260 13:54:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.518 ************************************ 00:13:02.518 START TEST nvme_reserve 00:13:02.518 ************************************ 00:13:02.518 13:54:26 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:02.776 ===================================================== 00:13:02.776 NVMe Controller at PCI bus 0, device 16, function 0 00:13:02.776 ===================================================== 00:13:02.776 Reservations: Not Supported 00:13:02.776 ===================================================== 00:13:02.776 NVMe Controller at PCI bus 0, device 17, function 0 00:13:02.776 ===================================================== 00:13:02.776 Reservations: Not Supported 00:13:02.776 ===================================================== 00:13:02.776 NVMe Controller at PCI bus 0, device 19, function 0 00:13:02.776 ===================================================== 00:13:02.776 Reservations: Not Supported 00:13:02.776 ===================================================== 00:13:02.776 NVMe Controller at PCI bus 0, device 18, function 0 00:13:02.776 ===================================================== 00:13:02.776 Reservations: Not Supported 00:13:02.776 Reservation test passed 00:13:02.776 ************************************ 00:13:02.776 END TEST nvme_reserve 00:13:02.776 ************************************ 00:13:02.776 00:13:02.776 real 0m0.275s 00:13:02.776 user 0m0.104s 00:13:02.776 sys 0m0.131s 00:13:02.776 13:54:27 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.776 13:54:27 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:13:02.776 13:54:27 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:02.776 13:54:27 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:02.776 13:54:27 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:02.776 13:54:27 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.776 13:54:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.776 ************************************ 00:13:02.776 START TEST nvme_err_injection 00:13:02.776 ************************************ 00:13:02.776 13:54:27 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:03.059 NVMe Error Injection test 00:13:03.059 Attached to 0000:00:10.0 00:13:03.059 Attached to 0000:00:11.0 00:13:03.059 Attached to 0000:00:13.0 00:13:03.059 Attached to 0000:00:12.0 00:13:03.059 0000:00:10.0: get features failed as expected 00:13:03.059 0000:00:11.0: get features 
failed as expected 00:13:03.059 0000:00:13.0: get features failed as expected 00:13:03.059 0000:00:12.0: get features failed as expected 00:13:03.059 0000:00:10.0: get features successfully as expected 00:13:03.059 0000:00:11.0: get features successfully as expected 00:13:03.059 0000:00:13.0: get features successfully as expected 00:13:03.059 0000:00:12.0: get features successfully as expected 00:13:03.059 0000:00:10.0: read failed as expected 00:13:03.059 0000:00:11.0: read failed as expected 00:13:03.059 0000:00:13.0: read failed as expected 00:13:03.059 0000:00:12.0: read failed as expected 00:13:03.059 0000:00:10.0: read successfully as expected 00:13:03.059 0000:00:11.0: read successfully as expected 00:13:03.059 0000:00:13.0: read successfully as expected 00:13:03.059 0000:00:12.0: read successfully as expected 00:13:03.059 Cleaning up... 00:13:03.059 00:13:03.059 real 0m0.307s 00:13:03.059 user 0m0.115s 00:13:03.059 sys 0m0.141s 00:13:03.059 ************************************ 00:13:03.059 END TEST nvme_err_injection 00:13:03.059 ************************************ 00:13:03.059 13:54:27 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.059 13:54:27 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:13:03.059 13:54:27 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:03.059 13:54:27 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:03.059 13:54:27 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:13:03.059 13:54:27 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.059 13:54:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.059 ************************************ 00:13:03.059 START TEST nvme_overhead 00:13:03.059 ************************************ 00:13:03.059 13:54:27 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:04.430 Initializing NVMe Controllers 00:13:04.430 Attached to 0000:00:10.0 00:13:04.430 Attached to 0000:00:11.0 00:13:04.430 Attached to 0000:00:13.0 00:13:04.430 Attached to 0000:00:12.0 00:13:04.430 Initialization complete. Launching workers. 
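The error-injection sequence above follows a fixed arm/expect-fail/disarm/expect-pass pattern for Get Features and for a read on each controller. SPDK exposes the hook as spdk_nvme_qpair_add_cmd_error_injection() and spdk_nvme_qpair_remove_cmd_error_injection(); a condensed sketch, with the caveat that the parameter details (e.g. a NULL qpair meaning the admin queue) are recalled from include/spdk/nvme.h and should be checked against the tree:

#include "spdk/nvme.h"

/* Arm: the next Get Features on the admin queue completes with
 * Invalid Field instead of reaching the device. */
static void expect_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL /* admin qpair */,
			SPDK_NVME_OPC_GET_FEATURES, true /* do_not_submit */,
			0 /* timeout_in_us */, 1 /* err_count */,
			SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);

	/* ...issue Get Features: completion fails "as expected"... */

	spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
			SPDK_NVME_OPC_GET_FEATURES);

	/* ...issue it again: succeeds "as expected". */
}

The "read failed/successfully as expected" lines are the same dance repeated for SPDK_NVME_OPC_READ on an I/O qpair.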
00:13:04.430 submit (in ns) avg, min, max = 18238.9, 14842.7, 79753.2 00:13:04.430 complete (in ns) avg, min, max = 12228.1, 9673.2, 145932.7 00:13:04.430 00:13:04.430 Submit histogram 00:13:04.430 ================ 00:13:04.430 Range in us Cumulative Count 00:13:04.430 14.836 - 14.895: 0.0497% ( 6) 00:13:04.430 14.895 - 15.011: 0.5302% ( 58) 00:13:04.430 15.011 - 15.127: 3.3634% ( 342) 00:13:04.430 15.127 - 15.244: 12.8987% ( 1151) 00:13:04.430 15.244 - 15.360: 27.1808% ( 1724) 00:13:04.430 15.360 - 15.476: 39.2262% ( 1454) 00:13:04.430 15.476 - 15.593: 44.2631% ( 608) 00:13:04.430 15.593 - 15.709: 46.0360% ( 214) 00:13:04.430 15.709 - 15.825: 47.0549% ( 123) 00:13:04.430 15.825 - 15.942: 47.6431% ( 71) 00:13:04.430 15.942 - 16.058: 48.0490% ( 49) 00:13:04.430 16.058 - 16.175: 48.5958% ( 66) 00:13:04.430 16.175 - 16.291: 49.1674% ( 69) 00:13:04.430 16.291 - 16.407: 49.9627% ( 96) 00:13:04.430 16.407 - 16.524: 50.9817% ( 123) 00:13:04.430 16.524 - 16.640: 52.1829% ( 145) 00:13:04.430 16.640 - 16.756: 53.1356% ( 115) 00:13:04.430 16.756 - 16.873: 53.7238% ( 71) 00:13:04.430 16.873 - 16.989: 54.0552% ( 40) 00:13:04.430 16.989 - 17.105: 54.2291% ( 21) 00:13:04.430 17.105 - 17.222: 54.5357% ( 37) 00:13:04.430 17.222 - 17.338: 54.7345% ( 24) 00:13:04.430 17.338 - 17.455: 54.9085% ( 21) 00:13:04.430 17.455 - 17.571: 55.2233% ( 38) 00:13:04.430 17.571 - 17.687: 56.2339% ( 122) 00:13:04.430 17.687 - 17.804: 59.8045% ( 431) 00:13:04.430 17.804 - 17.920: 65.8603% ( 731) 00:13:04.430 17.920 - 18.036: 71.5517% ( 687) 00:13:04.430 18.036 - 18.153: 75.2713% ( 449) 00:13:04.430 18.153 - 18.269: 77.4252% ( 260) 00:13:04.430 18.269 - 18.385: 78.4856% ( 128) 00:13:04.430 18.385 - 18.502: 79.3058% ( 99) 00:13:04.430 18.502 - 18.618: 79.8111% ( 61) 00:13:04.430 18.618 - 18.735: 80.1756% ( 44) 00:13:04.430 18.735 - 18.851: 80.5070% ( 40) 00:13:04.430 18.851 - 18.967: 81.0372% ( 64) 00:13:04.430 18.967 - 19.084: 81.6171% ( 70) 00:13:04.430 19.084 - 19.200: 82.2219% ( 73) 00:13:04.430 19.200 - 19.316: 82.7603% ( 65) 00:13:04.430 19.316 - 19.433: 83.4148% ( 79) 00:13:04.430 19.433 - 19.549: 83.6882% ( 33) 00:13:04.430 19.549 - 19.665: 83.9698% ( 34) 00:13:04.430 19.665 - 19.782: 84.1024% ( 16) 00:13:04.430 19.782 - 19.898: 84.2929% ( 23) 00:13:04.430 19.898 - 20.015: 84.3923% ( 12) 00:13:04.430 20.015 - 20.131: 84.4752% ( 10) 00:13:04.430 20.131 - 20.247: 84.5415% ( 8) 00:13:04.430 20.247 - 20.364: 84.6740% ( 16) 00:13:04.430 20.364 - 20.480: 84.8397% ( 20) 00:13:04.430 20.480 - 20.596: 84.9474% ( 13) 00:13:04.430 20.596 - 20.713: 85.1131% ( 20) 00:13:04.430 20.713 - 20.829: 85.2373% ( 15) 00:13:04.430 20.829 - 20.945: 85.3616% ( 15) 00:13:04.430 20.945 - 21.062: 85.4113% ( 6) 00:13:04.430 21.062 - 21.178: 85.5107% ( 12) 00:13:04.430 21.178 - 21.295: 85.5853% ( 9) 00:13:04.430 21.295 - 21.411: 85.6847% ( 12) 00:13:04.430 21.411 - 21.527: 85.7510% ( 8) 00:13:04.430 21.527 - 21.644: 85.8090% ( 7) 00:13:04.430 21.644 - 21.760: 85.8918% ( 10) 00:13:04.430 21.760 - 21.876: 85.9746% ( 10) 00:13:04.430 21.876 - 21.993: 86.0658% ( 11) 00:13:04.430 21.993 - 22.109: 86.1486% ( 10) 00:13:04.430 22.109 - 22.225: 86.2397% ( 11) 00:13:04.430 22.225 - 22.342: 86.3474% ( 13) 00:13:04.430 22.342 - 22.458: 86.4386% ( 11) 00:13:04.430 22.458 - 22.575: 86.5297% ( 11) 00:13:04.430 22.575 - 22.691: 86.5628% ( 4) 00:13:04.430 22.691 - 22.807: 86.6622% ( 12) 00:13:04.430 22.807 - 22.924: 86.7451% ( 10) 00:13:04.430 22.924 - 23.040: 86.8114% ( 8) 00:13:04.430 23.040 - 23.156: 86.9025% ( 11) 00:13:04.430 23.156 - 23.273: 87.0350% ( 16) 
00:13:04.430 23.273 - 23.389: 87.0930% ( 7) 00:13:04.430 23.389 - 23.505: 87.1510% ( 7) 00:13:04.430 23.505 - 23.622: 87.2670% ( 14) 00:13:04.430 23.622 - 23.738: 87.4161% ( 18) 00:13:04.430 23.738 - 23.855: 87.5238% ( 13) 00:13:04.430 23.855 - 23.971: 87.5984% ( 9) 00:13:04.430 23.971 - 24.087: 87.7558% ( 19) 00:13:04.430 24.087 - 24.204: 87.8718% ( 14) 00:13:04.430 24.204 - 24.320: 87.9960% ( 15) 00:13:04.430 24.320 - 24.436: 88.1120% ( 14) 00:13:04.430 24.436 - 24.553: 88.2528% ( 17) 00:13:04.430 24.553 - 24.669: 88.3854% ( 16) 00:13:04.430 24.669 - 24.785: 88.4931% ( 13) 00:13:04.430 24.785 - 24.902: 88.6091% ( 14) 00:13:04.430 24.902 - 25.018: 88.6919% ( 10) 00:13:04.430 25.018 - 25.135: 88.8410% ( 18) 00:13:04.430 25.135 - 25.251: 88.9653% ( 15) 00:13:04.430 25.251 - 25.367: 89.0896% ( 15) 00:13:04.430 25.367 - 25.484: 89.1890% ( 12) 00:13:04.430 25.484 - 25.600: 89.2718% ( 10) 00:13:04.430 25.600 - 25.716: 89.3878% ( 14) 00:13:04.430 25.716 - 25.833: 89.5038% ( 14) 00:13:04.430 25.833 - 25.949: 89.6363% ( 16) 00:13:04.430 25.949 - 26.065: 89.7357% ( 12) 00:13:04.430 26.065 - 26.182: 89.8186% ( 10) 00:13:04.430 26.182 - 26.298: 90.0257% ( 25) 00:13:04.430 26.298 - 26.415: 90.1582% ( 16) 00:13:04.430 26.415 - 26.531: 90.3073% ( 18) 00:13:04.430 26.531 - 26.647: 90.4150% ( 13) 00:13:04.430 26.647 - 26.764: 90.5724% ( 19) 00:13:04.430 26.764 - 26.880: 90.6719% ( 12) 00:13:04.430 26.880 - 26.996: 90.7381% ( 8) 00:13:04.430 26.996 - 27.113: 90.9038% ( 20) 00:13:04.430 27.113 - 27.229: 91.0032% ( 12) 00:13:04.430 27.229 - 27.345: 91.0861% ( 10) 00:13:04.430 27.345 - 27.462: 91.1938% ( 13) 00:13:04.430 27.462 - 27.578: 91.2518% ( 7) 00:13:04.430 27.578 - 27.695: 91.3677% ( 14) 00:13:04.430 27.695 - 27.811: 91.4837% ( 14) 00:13:04.430 27.811 - 27.927: 91.6163% ( 16) 00:13:04.430 27.927 - 28.044: 91.8399% ( 27) 00:13:04.430 28.044 - 28.160: 92.0056% ( 20) 00:13:04.430 28.160 - 28.276: 92.1879% ( 22) 00:13:04.430 28.276 - 28.393: 92.4861% ( 36) 00:13:04.430 28.393 - 28.509: 92.6932% ( 25) 00:13:04.430 28.509 - 28.625: 92.9418% ( 30) 00:13:04.430 28.625 - 28.742: 93.2151% ( 33) 00:13:04.430 28.742 - 28.858: 93.5382% ( 39) 00:13:04.430 28.858 - 28.975: 93.7619% ( 27) 00:13:04.430 28.975 - 29.091: 94.0933% ( 40) 00:13:04.430 29.091 - 29.207: 94.3584% ( 32) 00:13:04.430 29.207 - 29.324: 94.6400% ( 34) 00:13:04.430 29.324 - 29.440: 95.0460% ( 49) 00:13:04.430 29.440 - 29.556: 95.3028% ( 31) 00:13:04.430 29.556 - 29.673: 95.5430% ( 29) 00:13:04.430 29.673 - 29.789: 95.7419% ( 24) 00:13:04.430 29.789 - 30.022: 96.2638% ( 63) 00:13:04.430 30.022 - 30.255: 96.6946% ( 52) 00:13:04.430 30.255 - 30.487: 97.0508% ( 43) 00:13:04.430 30.487 - 30.720: 97.3656% ( 38) 00:13:04.430 30.720 - 30.953: 97.5396% ( 21) 00:13:04.430 30.953 - 31.185: 97.7218% ( 22) 00:13:04.430 31.185 - 31.418: 97.9372% ( 26) 00:13:04.430 31.418 - 31.651: 98.1112% ( 21) 00:13:04.430 31.651 - 31.884: 98.2603% ( 18) 00:13:04.430 31.884 - 32.116: 98.3183% ( 7) 00:13:04.430 32.116 - 32.349: 98.4094% ( 11) 00:13:04.430 32.349 - 32.582: 98.4923% ( 10) 00:13:04.430 32.582 - 32.815: 98.5420% ( 6) 00:13:04.430 32.815 - 33.047: 98.5751% ( 4) 00:13:04.430 33.047 - 33.280: 98.6414% ( 8) 00:13:04.430 33.280 - 33.513: 98.6911% ( 6) 00:13:04.430 33.513 - 33.745: 98.7408% ( 6) 00:13:04.430 33.745 - 33.978: 98.7905% ( 6) 00:13:04.430 33.978 - 34.211: 98.8402% ( 6) 00:13:04.430 34.211 - 34.444: 98.8568% ( 2) 00:13:04.430 34.444 - 34.676: 98.8899% ( 4) 00:13:04.430 34.676 - 34.909: 98.9562% ( 8) 00:13:04.430 34.909 - 35.142: 99.0059% ( 6) 00:13:04.430 
35.142 - 35.375: 99.0307% ( 3) 00:13:04.430 35.375 - 35.607: 99.0556% ( 3) 00:13:04.430 35.607 - 35.840: 99.0887% ( 4) 00:13:04.430 35.840 - 36.073: 99.1136% ( 3) 00:13:04.430 36.073 - 36.305: 99.1633% ( 6) 00:13:04.430 36.305 - 36.538: 99.1716% ( 1) 00:13:04.430 36.538 - 36.771: 99.2130% ( 5) 00:13:04.430 36.771 - 37.004: 99.2544% ( 5) 00:13:04.430 37.004 - 37.236: 99.2958% ( 5) 00:13:04.430 37.236 - 37.469: 99.3124% ( 2) 00:13:04.430 37.469 - 37.702: 99.3455% ( 4) 00:13:04.431 37.702 - 37.935: 99.3621% ( 2) 00:13:04.431 37.935 - 38.167: 99.3870% ( 3) 00:13:04.431 38.167 - 38.400: 99.4284% ( 5) 00:13:04.431 38.400 - 38.633: 99.4367% ( 1) 00:13:04.431 38.633 - 38.865: 99.4450% ( 1) 00:13:04.431 38.865 - 39.098: 99.4864% ( 5) 00:13:04.431 39.331 - 39.564: 99.5029% ( 2) 00:13:04.431 39.564 - 39.796: 99.5278% ( 3) 00:13:04.431 39.796 - 40.029: 99.5692% ( 5) 00:13:04.431 40.029 - 40.262: 99.5858% ( 2) 00:13:04.431 40.262 - 40.495: 99.6024% ( 2) 00:13:04.431 40.727 - 40.960: 99.6106% ( 1) 00:13:04.431 40.960 - 41.193: 99.6272% ( 2) 00:13:04.431 41.193 - 41.425: 99.6355% ( 1) 00:13:04.431 41.425 - 41.658: 99.6521% ( 2) 00:13:04.431 41.658 - 41.891: 99.6686% ( 2) 00:13:04.431 41.891 - 42.124: 99.6852% ( 2) 00:13:04.431 42.124 - 42.356: 99.7018% ( 2) 00:13:04.431 42.589 - 42.822: 99.7183% ( 2) 00:13:04.431 42.822 - 43.055: 99.7266% ( 1) 00:13:04.431 43.055 - 43.287: 99.7432% ( 2) 00:13:04.431 43.287 - 43.520: 99.7515% ( 1) 00:13:04.431 43.985 - 44.218: 99.7680% ( 2) 00:13:04.431 44.451 - 44.684: 99.7846% ( 2) 00:13:04.431 44.916 - 45.149: 99.7929% ( 1) 00:13:04.431 45.847 - 46.080: 99.8012% ( 1) 00:13:04.431 46.313 - 46.545: 99.8095% ( 1) 00:13:04.431 46.545 - 46.778: 99.8177% ( 1) 00:13:04.431 46.778 - 47.011: 99.8260% ( 1) 00:13:04.431 47.244 - 47.476: 99.8343% ( 1) 00:13:04.431 47.476 - 47.709: 99.8509% ( 2) 00:13:04.431 47.709 - 47.942: 99.8675% ( 2) 00:13:04.431 47.942 - 48.175: 99.8757% ( 1) 00:13:04.431 48.640 - 48.873: 99.8840% ( 1) 00:13:04.431 49.338 - 49.571: 99.8923% ( 1) 00:13:04.431 50.036 - 50.269: 99.9006% ( 1) 00:13:04.431 50.502 - 50.735: 99.9089% ( 1) 00:13:04.431 50.735 - 50.967: 99.9172% ( 1) 00:13:04.431 53.062 - 53.295: 99.9254% ( 1) 00:13:04.431 53.527 - 53.760: 99.9337% ( 1) 00:13:04.431 53.993 - 54.225: 99.9420% ( 1) 00:13:04.431 55.156 - 55.389: 99.9503% ( 1) 00:13:04.431 55.389 - 55.622: 99.9586% ( 1) 00:13:04.431 55.622 - 55.855: 99.9669% ( 1) 00:13:04.431 58.182 - 58.415: 99.9751% ( 1) 00:13:04.431 60.509 - 60.975: 99.9834% ( 1) 00:13:04.431 66.560 - 67.025: 99.9917% ( 1) 00:13:04.431 79.593 - 80.058: 100.0000% ( 1) 00:13:04.431 00:13:04.431 Complete histogram 00:13:04.431 ================== 00:13:04.431 Range in us Cumulative Count 00:13:04.431 9.658 - 9.716: 0.0994% ( 12) 00:13:04.431 9.716 - 9.775: 1.0190% ( 111) 00:13:04.431 9.775 - 9.833: 5.6913% ( 564) 00:13:04.431 9.833 - 9.891: 14.9698% ( 1120) 00:13:04.431 9.891 - 9.949: 26.3358% ( 1372) 00:13:04.431 9.949 - 10.007: 36.2605% ( 1198) 00:13:04.431 10.007 - 10.065: 42.1009% ( 705) 00:13:04.431 10.065 - 10.124: 45.4561% ( 405) 00:13:04.431 10.124 - 10.182: 47.2537% ( 217) 00:13:04.431 10.182 - 10.240: 48.2976% ( 126) 00:13:04.431 10.240 - 10.298: 48.9272% ( 76) 00:13:04.431 10.298 - 10.356: 49.3414% ( 50) 00:13:04.431 10.356 - 10.415: 49.5899% ( 30) 00:13:04.431 10.415 - 10.473: 49.7970% ( 25) 00:13:04.431 10.473 - 10.531: 49.9959% ( 24) 00:13:04.431 10.531 - 10.589: 50.1036% ( 13) 00:13:04.431 10.589 - 10.647: 50.1947% ( 11) 00:13:04.431 10.647 - 10.705: 50.2941% ( 12) 00:13:04.431 10.705 - 10.764: 50.3438% ( 6) 
00:13:04.431 10.764 - 10.822: 50.4101% ( 8) 00:13:04.431 10.822 - 10.880: 50.4515% ( 5) 00:13:04.431 10.880 - 10.938: 50.4929% ( 5) 00:13:04.431 10.938 - 10.996: 50.6006% ( 13) 00:13:04.431 10.996 - 11.055: 50.6917% ( 11) 00:13:04.431 11.055 - 11.113: 50.8574% ( 20) 00:13:04.431 11.113 - 11.171: 51.1557% ( 36) 00:13:04.431 11.171 - 11.229: 51.4870% ( 40) 00:13:04.431 11.229 - 11.287: 51.9510% ( 56) 00:13:04.431 11.287 - 11.345: 52.2989% ( 42) 00:13:04.431 11.345 - 11.404: 52.7214% ( 51) 00:13:04.431 11.404 - 11.462: 53.1356% ( 50) 00:13:04.431 11.462 - 11.520: 53.5001% ( 44) 00:13:04.431 11.520 - 11.578: 53.7238% ( 27) 00:13:04.431 11.578 - 11.636: 53.8978% ( 21) 00:13:04.431 11.636 - 11.695: 54.0717% ( 21) 00:13:04.431 11.695 - 11.753: 54.2043% ( 16) 00:13:04.431 11.753 - 11.811: 54.6931% ( 59) 00:13:04.431 11.811 - 11.869: 56.5902% ( 229) 00:13:04.431 11.869 - 11.927: 60.5501% ( 478) 00:13:04.431 11.927 - 11.985: 66.2911% ( 693) 00:13:04.431 11.985 - 12.044: 71.5599% ( 636) 00:13:04.431 12.044 - 12.102: 75.1968% ( 439) 00:13:04.431 12.102 - 12.160: 77.3590% ( 261) 00:13:04.431 12.160 - 12.218: 78.8253% ( 177) 00:13:04.431 12.218 - 12.276: 79.7200% ( 108) 00:13:04.431 12.276 - 12.335: 80.2833% ( 68) 00:13:04.431 12.335 - 12.393: 80.7472% ( 56) 00:13:04.431 12.393 - 12.451: 80.9378% ( 23) 00:13:04.431 12.451 - 12.509: 81.1283% ( 23) 00:13:04.431 12.509 - 12.567: 81.3603% ( 28) 00:13:04.431 12.567 - 12.625: 81.5094% ( 18) 00:13:04.431 12.625 - 12.684: 81.6171% ( 13) 00:13:04.431 12.684 - 12.742: 81.7414% ( 15) 00:13:04.431 12.742 - 12.800: 81.8739% ( 16) 00:13:04.431 12.800 - 12.858: 81.9650% ( 11) 00:13:04.431 12.858 - 12.916: 82.0645% ( 12) 00:13:04.431 12.916 - 12.975: 82.1639% ( 12) 00:13:04.431 12.975 - 13.033: 82.2384% ( 9) 00:13:04.431 13.033 - 13.091: 82.3213% ( 10) 00:13:04.431 13.091 - 13.149: 82.3710% ( 6) 00:13:04.431 13.149 - 13.207: 82.4207% ( 6) 00:13:04.431 13.207 - 13.265: 82.5118% ( 11) 00:13:04.431 13.265 - 13.324: 82.6775% ( 20) 00:13:04.431 13.324 - 13.382: 82.8846% ( 25) 00:13:04.431 13.382 - 13.440: 83.0669% ( 22) 00:13:04.431 13.440 - 13.498: 83.3154% ( 30) 00:13:04.431 13.498 - 13.556: 83.5971% ( 34) 00:13:04.431 13.556 - 13.615: 83.7710% ( 21) 00:13:04.431 13.615 - 13.673: 83.9864% ( 26) 00:13:04.431 13.673 - 13.731: 84.1770% ( 23) 00:13:04.431 13.731 - 13.789: 84.3344% ( 19) 00:13:04.431 13.789 - 13.847: 84.4586% ( 15) 00:13:04.431 13.847 - 13.905: 84.5497% ( 11) 00:13:04.431 13.905 - 13.964: 84.6326% ( 10) 00:13:04.431 13.964 - 14.022: 84.7237% ( 11) 00:13:04.431 14.022 - 14.080: 84.7817% ( 7) 00:13:04.431 14.080 - 14.138: 84.8563% ( 9) 00:13:04.431 14.138 - 14.196: 84.9474% ( 11) 00:13:04.431 14.196 - 14.255: 85.0302% ( 10) 00:13:04.431 14.255 - 14.313: 85.0882% ( 7) 00:13:04.431 14.313 - 14.371: 85.2291% ( 17) 00:13:04.431 14.371 - 14.429: 85.3616% ( 16) 00:13:04.431 14.429 - 14.487: 85.5521% ( 23) 00:13:04.431 14.487 - 14.545: 85.6847% ( 16) 00:13:04.431 14.545 - 14.604: 85.8255% ( 17) 00:13:04.431 14.604 - 14.662: 85.9167% ( 11) 00:13:04.431 14.662 - 14.720: 86.0078% ( 11) 00:13:04.431 14.720 - 14.778: 86.2066% ( 24) 00:13:04.431 14.778 - 14.836: 86.3143% ( 13) 00:13:04.431 14.836 - 14.895: 86.3723% ( 7) 00:13:04.431 14.895 - 15.011: 86.5546% ( 22) 00:13:04.431 15.011 - 15.127: 86.7037% ( 18) 00:13:04.431 15.127 - 15.244: 86.7948% ( 11) 00:13:04.431 15.244 - 15.360: 86.8694% ( 9) 00:13:04.431 15.360 - 15.476: 87.0516% ( 22) 00:13:04.431 15.476 - 15.593: 87.1676% ( 14) 00:13:04.431 15.593 - 15.709: 87.2504% ( 10) 00:13:04.431 15.709 - 15.825: 87.3167% ( 8) 
00:13:04.431 15.825 - 15.942: 87.4078% ( 11) 00:13:04.431 15.942 - 16.058: 87.5238% ( 14) 00:13:04.431 16.058 - 16.175: 87.5984% ( 9) 00:13:04.431 16.175 - 16.291: 87.6647% ( 8) 00:13:04.431 16.291 - 16.407: 87.7226% ( 7) 00:13:04.431 16.407 - 16.524: 87.7558% ( 4) 00:13:04.431 16.524 - 16.640: 87.8386% ( 10) 00:13:04.431 16.640 - 16.756: 87.9049% ( 8) 00:13:04.431 16.756 - 16.873: 87.9712% ( 8) 00:13:04.431 16.873 - 16.989: 88.0623% ( 11) 00:13:04.431 16.989 - 17.105: 88.2280% ( 20) 00:13:04.431 17.105 - 17.222: 88.3605% ( 16) 00:13:04.431 17.222 - 17.338: 88.4765% ( 14) 00:13:04.431 17.338 - 17.455: 88.5428% ( 8) 00:13:04.431 17.455 - 17.571: 88.6422% ( 12) 00:13:04.431 17.571 - 17.687: 88.7913% ( 18) 00:13:04.431 17.687 - 17.804: 88.8907% ( 12) 00:13:04.431 17.804 - 17.920: 89.0150% ( 15) 00:13:04.431 17.920 - 18.036: 89.0978% ( 10) 00:13:04.431 18.036 - 18.153: 89.2055% ( 13) 00:13:04.431 18.153 - 18.269: 89.3132% ( 13) 00:13:04.431 18.269 - 18.385: 89.4126% ( 12) 00:13:04.431 18.385 - 18.502: 89.6032% ( 23) 00:13:04.431 18.502 - 18.618: 89.8103% ( 25) 00:13:04.431 18.618 - 18.735: 90.0091% ( 24) 00:13:04.431 18.735 - 18.851: 90.2411% ( 28) 00:13:04.431 18.851 - 18.967: 90.5559% ( 38) 00:13:04.431 18.967 - 19.084: 90.8375% ( 34) 00:13:04.431 19.084 - 19.200: 91.1358% ( 36) 00:13:04.431 19.200 - 19.316: 91.4672% ( 40) 00:13:04.431 19.316 - 19.433: 91.8565% ( 47) 00:13:04.431 19.433 - 19.549: 92.2707% ( 50) 00:13:04.431 19.549 - 19.665: 92.6518% ( 46) 00:13:04.431 19.665 - 19.782: 93.0743% ( 51) 00:13:04.431 19.782 - 19.898: 93.3477% ( 33) 00:13:04.432 19.898 - 20.015: 93.6873% ( 41) 00:13:04.432 20.015 - 20.131: 93.9607% ( 33) 00:13:04.432 20.131 - 20.247: 94.2341% ( 33) 00:13:04.432 20.247 - 20.364: 94.5075% ( 33) 00:13:04.432 20.364 - 20.480: 94.7395% ( 28) 00:13:04.432 20.480 - 20.596: 94.9217% ( 22) 00:13:04.432 20.596 - 20.713: 95.1371% ( 26) 00:13:04.432 20.713 - 20.829: 95.4022% ( 32) 00:13:04.432 20.829 - 20.945: 95.5762% ( 21) 00:13:04.432 20.945 - 21.062: 95.8330% ( 31) 00:13:04.432 21.062 - 21.178: 96.1064% ( 33) 00:13:04.432 21.178 - 21.295: 96.3798% ( 33) 00:13:04.432 21.295 - 21.411: 96.6117% ( 28) 00:13:04.432 21.411 - 21.527: 96.6780% ( 8) 00:13:04.432 21.527 - 21.644: 96.8023% ( 15) 00:13:04.432 21.644 - 21.760: 96.9679% ( 20) 00:13:04.432 21.760 - 21.876: 97.0922% ( 15) 00:13:04.432 21.876 - 21.993: 97.1750% ( 10) 00:13:04.432 21.993 - 22.109: 97.3242% ( 18) 00:13:04.432 22.109 - 22.225: 97.4236% ( 12) 00:13:04.432 22.225 - 22.342: 97.6058% ( 22) 00:13:04.432 22.342 - 22.458: 97.7964% ( 23) 00:13:04.432 22.458 - 22.575: 97.8792% ( 10) 00:13:04.432 22.575 - 22.691: 97.9786% ( 12) 00:13:04.432 22.691 - 22.807: 98.0366% ( 7) 00:13:04.432 22.807 - 22.924: 98.1277% ( 11) 00:13:04.432 22.924 - 23.040: 98.1360% ( 1) 00:13:04.432 23.040 - 23.156: 98.2023% ( 8) 00:13:04.432 23.156 - 23.273: 98.2520% ( 6) 00:13:04.432 23.273 - 23.389: 98.3349% ( 10) 00:13:04.432 23.389 - 23.505: 98.3928% ( 7) 00:13:04.432 23.505 - 23.622: 98.4508% ( 7) 00:13:04.432 23.622 - 23.738: 98.4591% ( 1) 00:13:04.432 23.738 - 23.855: 98.5005% ( 5) 00:13:04.432 23.855 - 23.971: 98.5420% ( 5) 00:13:04.432 23.971 - 24.087: 98.5585% ( 2) 00:13:04.432 24.087 - 24.204: 98.5668% ( 1) 00:13:04.432 24.204 - 24.320: 98.6082% ( 5) 00:13:04.432 24.320 - 24.436: 98.6414% ( 4) 00:13:04.432 24.436 - 24.553: 98.6497% ( 1) 00:13:04.432 24.553 - 24.669: 98.6994% ( 6) 00:13:04.432 24.669 - 24.785: 98.7325% ( 4) 00:13:04.432 24.785 - 24.902: 98.7739% ( 5) 00:13:04.432 24.902 - 25.018: 98.8071% ( 4) 00:13:04.432 25.018 - 
25.135: 98.8485% ( 5) 00:13:04.432 25.135 - 25.251: 98.8899% ( 5) 00:13:04.432 25.251 - 25.367: 98.9065% ( 2) 00:13:04.432 25.367 - 25.484: 98.9230% ( 2) 00:13:04.432 25.484 - 25.600: 98.9645% ( 5) 00:13:04.432 25.600 - 25.716: 98.9727% ( 1) 00:13:04.432 25.833 - 25.949: 98.9893% ( 2) 00:13:04.432 25.949 - 26.065: 99.0390% ( 6) 00:13:04.432 26.065 - 26.182: 99.0639% ( 3) 00:13:04.432 26.182 - 26.298: 99.0722% ( 1) 00:13:04.432 26.298 - 26.415: 99.0887% ( 2) 00:13:04.432 26.415 - 26.531: 99.1136% ( 3) 00:13:04.432 26.531 - 26.647: 99.1384% ( 3) 00:13:04.432 26.647 - 26.764: 99.1633% ( 3) 00:13:04.432 26.880 - 26.996: 99.1716% ( 1) 00:13:04.432 26.996 - 27.113: 99.1799% ( 1) 00:13:04.432 27.113 - 27.229: 99.1964% ( 2) 00:13:04.432 27.229 - 27.345: 99.2047% ( 1) 00:13:04.432 27.345 - 27.462: 99.2130% ( 1) 00:13:04.432 27.462 - 27.578: 99.2461% ( 4) 00:13:04.432 27.578 - 27.695: 99.2627% ( 2) 00:13:04.432 27.695 - 27.811: 99.2710% ( 1) 00:13:04.432 27.811 - 27.927: 99.2875% ( 2) 00:13:04.432 27.927 - 28.044: 99.3041% ( 2) 00:13:04.432 28.044 - 28.160: 99.3124% ( 1) 00:13:04.432 28.160 - 28.276: 99.3207% ( 1) 00:13:04.432 28.276 - 28.393: 99.3290% ( 1) 00:13:04.432 28.393 - 28.509: 99.3455% ( 2) 00:13:04.432 28.509 - 28.625: 99.3538% ( 1) 00:13:04.432 28.625 - 28.742: 99.3621% ( 1) 00:13:04.432 28.742 - 28.858: 99.3704% ( 1) 00:13:04.432 28.975 - 29.091: 99.3787% ( 1) 00:13:04.432 29.091 - 29.207: 99.3952% ( 2) 00:13:04.432 29.324 - 29.440: 99.4118% ( 2) 00:13:04.432 29.440 - 29.556: 99.4284% ( 2) 00:13:04.432 29.556 - 29.673: 99.4450% ( 2) 00:13:04.432 29.789 - 30.022: 99.4698% ( 3) 00:13:04.432 30.022 - 30.255: 99.4947% ( 3) 00:13:04.432 30.255 - 30.487: 99.5112% ( 2) 00:13:04.432 30.487 - 30.720: 99.5195% ( 1) 00:13:04.432 30.720 - 30.953: 99.5278% ( 1) 00:13:04.432 31.185 - 31.418: 99.5444% ( 2) 00:13:04.432 31.418 - 31.651: 99.5526% ( 1) 00:13:04.432 31.651 - 31.884: 99.5692% ( 2) 00:13:04.432 32.116 - 32.349: 99.5775% ( 1) 00:13:04.432 32.349 - 32.582: 99.5941% ( 2) 00:13:04.432 32.582 - 32.815: 99.6106% ( 2) 00:13:04.432 32.815 - 33.047: 99.6189% ( 1) 00:13:04.432 33.047 - 33.280: 99.6521% ( 4) 00:13:04.432 33.280 - 33.513: 99.6603% ( 1) 00:13:04.432 33.745 - 33.978: 99.6852% ( 3) 00:13:04.432 33.978 - 34.211: 99.6935% ( 1) 00:13:04.432 34.211 - 34.444: 99.7018% ( 1) 00:13:04.432 34.444 - 34.676: 99.7100% ( 1) 00:13:04.432 34.676 - 34.909: 99.7183% ( 1) 00:13:04.432 34.909 - 35.142: 99.7266% ( 1) 00:13:04.432 35.142 - 35.375: 99.7349% ( 1) 00:13:04.432 35.375 - 35.607: 99.7515% ( 2) 00:13:04.432 36.073 - 36.305: 99.7598% ( 1) 00:13:04.432 36.305 - 36.538: 99.7680% ( 1) 00:13:04.432 37.236 - 37.469: 99.7763% ( 1) 00:13:04.432 37.935 - 38.167: 99.7846% ( 1) 00:13:04.432 38.633 - 38.865: 99.7929% ( 1) 00:13:04.432 38.865 - 39.098: 99.8095% ( 2) 00:13:04.432 39.098 - 39.331: 99.8177% ( 1) 00:13:04.432 39.331 - 39.564: 99.8343% ( 2) 00:13:04.432 39.564 - 39.796: 99.8426% ( 1) 00:13:04.432 39.796 - 40.029: 99.8509% ( 1) 00:13:04.432 40.029 - 40.262: 99.8675% ( 2) 00:13:04.432 40.495 - 40.727: 99.8757% ( 1) 00:13:04.432 40.727 - 40.960: 99.8923% ( 2) 00:13:04.432 40.960 - 41.193: 99.9006% ( 1) 00:13:04.432 41.425 - 41.658: 99.9172% ( 2) 00:13:04.432 41.658 - 41.891: 99.9503% ( 4) 00:13:04.432 43.287 - 43.520: 99.9586% ( 1) 00:13:04.432 51.665 - 51.898: 99.9669% ( 1) 00:13:04.432 63.767 - 64.233: 99.9751% ( 1) 00:13:04.432 66.095 - 66.560: 99.9834% ( 1) 00:13:04.432 83.316 - 83.782: 99.9917% ( 1) 00:13:04.432 145.222 - 146.153: 100.0000% ( 1) 00:13:04.432 00:13:04.432 00:13:04.432 real 0m1.289s 
00:13:04.432 user 0m1.129s 00:13:04.432 sys 0m0.108s 00:13:04.432 13:54:28 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:04.432 13:54:28 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:13:04.432 ************************************ 00:13:04.432 END TEST nvme_overhead 00:13:04.432 ************************************ 00:13:04.432 13:54:28 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:04.432 13:54:28 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:13:04.432 13:54:28 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:13:04.432 13:54:28 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.432 13:54:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.432 ************************************ 00:13:04.432 START TEST nvme_arbitration 00:13:04.432 ************************************ 00:13:04.432 13:54:28 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:13:08.614 Initializing NVMe Controllers 00:13:08.614 Attached to 0000:00:10.0 00:13:08.614 Attached to 0000:00:11.0 00:13:08.614 Attached to 0000:00:13.0 00:13:08.614 Attached to 0000:00:12.0 00:13:08.614 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:13:08.614 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:13:08.614 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:13:08.614 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:13:08.614 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:13:08.614 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:13:08.614 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:08.614 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:13:08.614 Initialization complete. Launching workers. 
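Note: the arbitration example launched above drives all four attached QEMU NVMe controllers with one submission thread per core in the 0xf mask, so the per-core IO/s lines that follow show how the controllers' arbitration of the differently prioritized queues shifts throughput between cores. A minimal way to repeat the run by hand, assuming only the SPDK build tree at the path echoed in the trace (the flag glosses in the comments follow the example's conventional usage and are not re-verified here):

    cd /home/vagrant/spdk_repo/spdk
    # -t 3: run for 3 seconds; -c 0xf: cores 0-3; -i 0: shared-memory instance ID;
    # the remaining flags are passed exactly as the harness passes them above
    sudo ./build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 \
        -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0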
00:13:08.614 Starting thread on core 1 with urgent priority queue
00:13:08.614 Starting thread on core 2 with urgent priority queue
00:13:08.614 Starting thread on core 3 with urgent priority queue
00:13:08.614 Starting thread on core 0 with urgent priority queue
00:13:08.614 QEMU NVMe Ctrl (12340 ) core 0: 490.67 IO/s 203.80 secs/100000 ios
00:13:08.614 QEMU NVMe Ctrl (12342 ) core 0: 490.67 IO/s 203.80 secs/100000 ios
00:13:08.614 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios
00:13:08.614 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios
00:13:08.614 QEMU NVMe Ctrl (12343 ) core 2: 768.00 IO/s 130.21 secs/100000 ios
00:13:08.614 QEMU NVMe Ctrl (12342 ) core 3: 618.67 IO/s 161.64 secs/100000 ios
00:13:08.614 ========================================================
00:13:08.614
00:13:08.614
00:13:08.614 real 0m3.468s
00:13:08.614 user 0m9.439s
00:13:08.614 sys 0m0.154s
00:13:08.614 ************************************
00:13:08.614 END TEST nvme_arbitration
00:13:08.614 ************************************
00:13:08.614 13:54:32 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:08.614 13:54:32 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:13:08.614 13:54:32 nvme -- common/autotest_common.sh@1142 -- # return 0
00:13:08.614 13:54:32 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:13:08.614 13:54:32 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:13:08.614 13:54:32 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:08.614 13:54:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:13:08.614 ************************************
00:13:08.614 START TEST nvme_single_aen
00:13:08.614 ************************************
00:13:08.614 13:54:32 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:13:08.614 Asynchronous Event Request test
00:13:08.614 Attached to 0000:00:10.0
00:13:08.614 Attached to 0000:00:11.0
00:13:08.614 Attached to 0000:00:13.0
00:13:08.614 Attached to 0000:00:12.0
00:13:08.614 Reset controller to setup AER completions for this process
00:13:08.614 Registering asynchronous event callbacks...
00:13:08.614 Getting orig temperature thresholds of all controllers 00:13:08.614 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.614 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.614 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.614 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.614 Setting all controllers temperature threshold low to trigger AER 00:13:08.614 Waiting for all controllers temperature threshold to be set lower 00:13:08.614 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.614 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:08.614 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.614 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:08.614 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.614 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:08.614 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.614 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:08.614 Waiting for all controllers to trigger AER and reset threshold 00:13:08.614 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.614 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.614 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.614 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.614 Cleaning up... 00:13:08.614 00:13:08.614 real 0m0.281s 00:13:08.614 user 0m0.123s 00:13:08.614 sys 0m0.108s 00:13:08.614 13:54:32 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.614 ************************************ 00:13:08.614 END TEST nvme_single_aen 00:13:08.614 ************************************ 00:13:08.614 13:54:32 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:13:08.614 13:54:32 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:08.614 13:54:32 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:13:08.614 13:54:32 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:08.614 13:54:32 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.614 13:54:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.614 ************************************ 00:13:08.614 START TEST nvme_doorbell_aers 00:13:08.614 ************************************ 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:08.614 13:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:08.614 [2024-07-15 13:54:33.032443] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:18.578 Executing: test_write_invalid_db 00:13:18.578 Waiting for AER completion... 00:13:18.578 Failure: test_write_invalid_db 00:13:18.578 00:13:18.578 Executing: test_invalid_db_write_overflow_sq 00:13:18.578 Waiting for AER completion... 00:13:18.578 Failure: test_invalid_db_write_overflow_sq 00:13:18.578 00:13:18.578 Executing: test_invalid_db_write_overflow_cq 00:13:18.578 Waiting for AER completion... 00:13:18.578 Failure: test_invalid_db_write_overflow_cq 00:13:18.578 00:13:18.578 13:54:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:18.578 13:54:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:18.578 [2024-07-15 13:54:43.024426] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:28.553 Executing: test_write_invalid_db 00:13:28.553 Waiting for AER completion... 00:13:28.553 Failure: test_write_invalid_db 00:13:28.553 00:13:28.553 Executing: test_invalid_db_write_overflow_sq 00:13:28.553 Waiting for AER completion... 00:13:28.553 Failure: test_invalid_db_write_overflow_sq 00:13:28.553 00:13:28.553 Executing: test_invalid_db_write_overflow_cq 00:13:28.553 Waiting for AER completion... 00:13:28.553 Failure: test_invalid_db_write_overflow_cq 00:13:28.553 00:13:28.553 13:54:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:28.553 13:54:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:28.553 [2024-07-15 13:54:53.064709] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:38.525 Executing: test_write_invalid_db 00:13:38.525 Waiting for AER completion... 00:13:38.525 Failure: test_write_invalid_db 00:13:38.525 00:13:38.525 Executing: test_invalid_db_write_overflow_sq 00:13:38.525 Waiting for AER completion... 00:13:38.525 Failure: test_invalid_db_write_overflow_sq 00:13:38.525 00:13:38.525 Executing: test_invalid_db_write_overflow_cq 00:13:38.525 Waiting for AER completion... 
00:13:38.525 Failure: test_invalid_db_write_overflow_cq 00:13:38.525 00:13:38.525 13:55:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:38.525 13:55:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:38.783 [2024-07-15 13:55:03.147919] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 Executing: test_write_invalid_db 00:13:48.744 Waiting for AER completion... 00:13:48.744 Failure: test_write_invalid_db 00:13:48.744 00:13:48.744 Executing: test_invalid_db_write_overflow_sq 00:13:48.744 Waiting for AER completion... 00:13:48.744 Failure: test_invalid_db_write_overflow_sq 00:13:48.744 00:13:48.744 Executing: test_invalid_db_write_overflow_cq 00:13:48.744 Waiting for AER completion... 00:13:48.744 Failure: test_invalid_db_write_overflow_cq 00:13:48.744 00:13:48.744 00:13:48.744 real 0m40.240s 00:13:48.744 user 0m33.962s 00:13:48.744 sys 0m5.891s 00:13:48.744 13:55:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.744 13:55:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:13:48.744 ************************************ 00:13:48.744 END TEST nvme_doorbell_aers 00:13:48.744 ************************************ 00:13:48.744 13:55:12 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:48.744 13:55:12 nvme -- nvme/nvme.sh@97 -- # uname 00:13:48.744 13:55:12 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:13:48.744 13:55:12 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:48.744 13:55:12 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:13:48.744 13:55:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.744 13:55:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:48.744 ************************************ 00:13:48.744 START TEST nvme_multi_aen 00:13:48.744 ************************************ 00:13:48.744 13:55:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:48.744 [2024-07-15 13:55:13.166652] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 [2024-07-15 13:55:13.166811] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 [2024-07-15 13:55:13.166866] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 [2024-07-15 13:55:13.168807] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 [2024-07-15 13:55:13.168869] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 [2024-07-15 13:55:13.168909] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 
00:13:48.744 [2024-07-15 13:55:13.170826] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 [2024-07-15 13:55:13.171095] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 [2024-07-15 13:55:13.171343] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 [2024-07-15 13:55:13.173381] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 [2024-07-15 13:55:13.173657] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 [2024-07-15 13:55:13.173975] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70279) is not found. Dropping the request. 00:13:48.744 Child process pid: 70795 00:13:49.001 [Child] Asynchronous Event Request test 00:13:49.001 [Child] Attached to 0000:00:10.0 00:13:49.001 [Child] Attached to 0000:00:11.0 00:13:49.001 [Child] Attached to 0000:00:13.0 00:13:49.001 [Child] Attached to 0000:00:12.0 00:13:49.001 [Child] Registering asynchronous event callbacks... 00:13:49.001 [Child] Getting orig temperature thresholds of all controllers 00:13:49.001 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:49.001 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:49.001 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:49.001 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:49.001 [Child] Waiting for all controllers to trigger AER and reset threshold 00:13:49.001 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:49.002 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:49.002 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:49.002 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:49.002 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:49.002 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:49.002 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:49.002 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:49.002 [Child] Cleaning up... 00:13:49.002 Asynchronous Event Request test 00:13:49.002 Attached to 0000:00:10.0 00:13:49.002 Attached to 0000:00:11.0 00:13:49.002 Attached to 0000:00:13.0 00:13:49.002 Attached to 0000:00:12.0 00:13:49.002 Reset controller to setup AER completions for this process 00:13:49.002 Registering asynchronous event callbacks... 
00:13:49.002 Getting orig temperature thresholds of all controllers 00:13:49.002 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:49.002 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:49.002 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:49.002 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:49.002 Setting all controllers temperature threshold low to trigger AER 00:13:49.002 Waiting for all controllers temperature threshold to be set lower 00:13:49.002 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:49.002 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:49.002 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:49.002 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:49.002 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:49.002 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:49.002 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:49.002 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:49.002 Waiting for all controllers to trigger AER and reset threshold 00:13:49.002 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:49.002 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:49.002 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:49.002 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:49.002 Cleaning up... 00:13:49.002 00:13:49.002 real 0m0.578s 00:13:49.002 user 0m0.219s 00:13:49.002 sys 0m0.252s 00:13:49.002 13:55:13 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.002 13:55:13 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:13:49.002 ************************************ 00:13:49.002 END TEST nvme_multi_aen 00:13:49.002 ************************************ 00:13:49.259 13:55:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:49.259 13:55:13 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:49.259 13:55:13 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:49.259 13:55:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.259 13:55:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:49.259 ************************************ 00:13:49.259 START TEST nvme_startup 00:13:49.259 ************************************ 00:13:49.259 13:55:13 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:49.517 Initializing NVMe Controllers 00:13:49.517 Attached to 0000:00:10.0 00:13:49.517 Attached to 0000:00:11.0 00:13:49.517 Attached to 0000:00:13.0 00:13:49.517 Attached to 0000:00:12.0 00:13:49.517 Initialization complete. 00:13:49.517 Time used:178950.984 (us). 
00:13:49.517 00:13:49.517 real 0m0.269s 00:13:49.517 user 0m0.089s 00:13:49.517 sys 0m0.132s 00:13:49.517 13:55:13 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.517 ************************************ 00:13:49.517 END TEST nvme_startup 00:13:49.517 ************************************ 00:13:49.517 13:55:13 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:13:49.517 13:55:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:49.517 13:55:13 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:13:49.517 13:55:13 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:49.517 13:55:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.517 13:55:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:49.517 ************************************ 00:13:49.517 START TEST nvme_multi_secondary 00:13:49.517 ************************************ 00:13:49.517 13:55:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:13:49.517 13:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=70851 00:13:49.517 13:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:13:49.517 13:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=70852 00:13:49.517 13:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:13:49.517 13:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:52.803 Initializing NVMe Controllers 00:13:52.803 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:52.803 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:52.803 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:52.803 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:52.803 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:52.803 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:52.803 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:52.803 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:52.803 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:52.803 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:52.803 Initialization complete. Launching workers. 
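Note: nvme_multi_secondary runs one primary and two secondary spdk_nvme_perf processes against the same controllers at once; that is why every invocation above carries the same -i 0 instance ID (shared memory) and a disjoint core mask. A sketch of the three runs as the harness issues them, with command lines copied from the trace (the backgrounding and wait are a simplification of run_test's actual bookkeeping):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # primary on core 0 for 5 s; secondaries on cores 1 and 2 for 3 s each;
    # the shared -i 0 lets the secondaries attach to the primary's controllers
    $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    wait

Each process prints its own latency table in the output that follows, with rows labeled "from core 0/1/2" accordingly.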
00:13:52.803 ========================================================
00:13:52.803 Latency(us)
00:13:52.803 Device Information : IOPS MiB/s Average min max
00:13:52.803 PCIE (0000:00:10.0) NSID 1 from core 2: 2541.30 9.93 6293.37 1051.94 14394.97
00:13:52.803 PCIE (0000:00:11.0) NSID 1 from core 2: 2541.30 9.93 6286.29 1090.73 14618.19
00:13:52.803 PCIE (0000:00:13.0) NSID 1 from core 2: 2541.30 9.93 6286.71 1073.51 14548.74
00:13:52.803 PCIE (0000:00:12.0) NSID 1 from core 2: 2541.30 9.93 6287.19 1063.73 15043.31
00:13:52.803 PCIE (0000:00:12.0) NSID 2 from core 2: 2546.63 9.95 6273.57 1096.48 14701.57
00:13:52.803 PCIE (0000:00:12.0) NSID 3 from core 2: 2551.95 9.97 6261.09 1081.59 14944.59
00:13:52.803 ========================================================
00:13:52.803 Total : 15263.77 59.62 6281.35 1051.94 15043.31
00:13:52.803
00:13:52.803 13:55:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 70851
00:13:52.803 Initializing NVMe Controllers
00:13:52.803 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:13:52.803 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:13:52.803 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:13:52.803 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:13:52.803 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:13:52.803 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:13:52.803 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:13:52.803 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:13:52.803 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:13:52.803 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:13:52.803 Initialization complete. Launching workers.
00:13:52.803 ========================================================
00:13:52.803 Latency(us)
00:13:52.803 Device Information : IOPS MiB/s Average min max
00:13:52.803 PCIE (0000:00:10.0) NSID 1 from core 1: 4748.52 18.55 3366.68 1268.08 9140.43
00:13:52.803 PCIE (0000:00:11.0) NSID 1 from core 1: 4748.52 18.55 3368.12 1198.10 8781.73
00:13:52.803 PCIE (0000:00:13.0) NSID 1 from core 1: 4748.52 18.55 3367.77 1234.87 8902.79
00:13:52.803 PCIE (0000:00:12.0) NSID 1 from core 1: 4748.52 18.55 3367.41 1152.76 8936.71
00:13:52.803 PCIE (0000:00:12.0) NSID 2 from core 1: 4748.52 18.55 3367.29 1303.22 9066.48
00:13:52.803 PCIE (0000:00:12.0) NSID 3 from core 1: 4748.52 18.55 3366.95 1305.89 9168.21
00:13:52.803 ========================================================
00:13:52.803 Total : 28491.10 111.29 3367.37 1152.76 9168.21
00:13:52.803
00:13:55.327 Initializing NVMe Controllers
00:13:55.327 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:13:55.327 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:13:55.327 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:13:55.327 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:13:55.327 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:13:55.327 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:13:55.327 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:13:55.327 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:13:55.327 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:13:55.327 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:13:55.327 Initialization complete. Launching workers.
00:13:55.327 ========================================================
00:13:55.327 Latency(us)
00:13:55.327 Device Information : IOPS MiB/s Average min max
00:13:55.327 PCIE (0000:00:10.0) NSID 1 from core 0: 6493.57 25.37 2461.72 953.87 9968.13
00:13:55.327 PCIE (0000:00:11.0) NSID 1 from core 0: 6493.57 25.37 2463.35 978.19 10113.68
00:13:55.327 PCIE (0000:00:13.0) NSID 1 from core 0: 6493.57 25.37 2463.36 994.95 9958.03
00:13:55.327 PCIE (0000:00:12.0) NSID 1 from core 0: 6493.57 25.37 2463.37 983.22 10267.31
00:13:55.327 PCIE (0000:00:12.0) NSID 2 from core 0: 6493.57 25.37 2463.37 986.93 10796.02
00:13:55.327 PCIE (0000:00:12.0) NSID 3 from core 0: 6493.57 25.37 2463.37 968.52 11091.42
00:13:55.327 ========================================================
00:13:55.327 Total : 38961.41 152.19 2463.09 953.87 11091.42
00:13:55.327
00:13:55.327 13:55:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 70852
00:13:55.327 13:55:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=70927
00:13:55.327 13:55:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:13:55.327 13:55:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=70928
00:13:55.327 13:55:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:13:55.327 13:55:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:13:58.609 Initializing NVMe Controllers
00:13:58.609 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:13:58.609 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:13:58.609 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:13:58.609 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:13:58.609 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:13:58.609 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:13:58.609 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:13:58.609 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:13:58.609 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:13:58.609 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:13:58.609 Initialization complete. Launching workers.
00:13:58.609 ========================================================
00:13:58.609 Latency(us)
00:13:58.609 Device Information : IOPS MiB/s Average min max
00:13:58.609 PCIE (0000:00:10.0) NSID 1 from core 0: 4940.40 19.30 3236.67 959.40 12233.24
00:13:58.609 PCIE (0000:00:11.0) NSID 1 from core 0: 4940.40 19.30 3238.49 991.84 12215.00
00:13:58.609 PCIE (0000:00:13.0) NSID 1 from core 0: 4940.40 19.30 3238.55 997.45 11798.95
00:13:58.609 PCIE (0000:00:12.0) NSID 1 from core 0: 4940.40 19.30 3238.67 993.63 11895.29
00:13:58.609 PCIE (0000:00:12.0) NSID 2 from core 0: 4940.40 19.30 3238.72 1008.50 12000.78
00:13:58.609 PCIE (0000:00:12.0) NSID 3 from core 0: 4940.40 19.30 3238.93 996.79 12270.05
00:13:58.609 ========================================================
00:13:58.609 Total : 29642.40 115.79 3238.34 959.40 12270.05
00:13:58.609
00:13:58.609 Initializing NVMe Controllers
00:13:58.609 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:13:58.609 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:13:58.609 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:13:58.609 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:13:58.609 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:13:58.609 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:13:58.609 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:13:58.609 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:13:58.609 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:13:58.609 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:13:58.609 Initialization complete. Launching workers.
00:13:58.609 ========================================================
00:13:58.609 Latency(us)
00:13:58.609 Device Information : IOPS MiB/s Average min max
00:13:58.609 PCIE (0000:00:10.0) NSID 1 from core 1: 4418.65 17.26 3618.83 1471.31 10170.92
00:13:58.609 PCIE (0000:00:11.0) NSID 1 from core 1: 4418.65 17.26 3620.43 1425.18 10075.81
00:13:58.609 PCIE (0000:00:13.0) NSID 1 from core 1: 4418.65 17.26 3620.43 1487.63 9910.42
00:13:58.609 PCIE (0000:00:12.0) NSID 1 from core 1: 4418.65 17.26 3620.71 1457.99 10108.98
00:13:58.609 PCIE (0000:00:12.0) NSID 2 from core 1: 4418.65 17.26 3620.66 1459.34 10152.60
00:13:58.609 PCIE (0000:00:12.0) NSID 3 from core 1: 4418.65 17.26 3620.64 1320.63 10261.68
00:13:58.609 ========================================================
00:13:58.609 Total : 26511.92 103.56 3620.28 1320.63 10261.68
00:13:58.609
00:14:00.508 Initializing NVMe Controllers
00:14:00.508 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:00.508 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:14:00.508 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:14:00.508 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:14:00.508 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:14:00.508 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:14:00.508 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:14:00.508 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:14:00.508 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:14:00.508 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:14:00.508 Initialization complete. Launching workers.
00:14:00.508 ======================================================== 00:14:00.508 Latency(us) 00:14:00.508 Device Information : IOPS MiB/s Average min max 00:14:00.508 PCIE (0000:00:10.0) NSID 1 from core 2: 3308.72 12.92 4832.67 952.61 19996.65 00:14:00.508 PCIE (0000:00:11.0) NSID 1 from core 2: 3308.72 12.92 4834.64 984.38 20227.92 00:14:00.508 PCIE (0000:00:13.0) NSID 1 from core 2: 3308.72 12.92 4834.07 964.82 20308.92 00:14:00.508 PCIE (0000:00:12.0) NSID 1 from core 2: 3308.72 12.92 4834.97 960.18 20090.48 00:14:00.508 PCIE (0000:00:12.0) NSID 2 from core 2: 3308.72 12.92 4834.18 959.39 20270.99 00:14:00.508 PCIE (0000:00:12.0) NSID 3 from core 2: 3308.72 12.92 4834.84 923.70 20233.54 00:14:00.508 ======================================================== 00:14:00.508 Total : 19852.34 77.55 4834.23 923.70 20308.92 00:14:00.508 00:14:00.767 ************************************ 00:14:00.767 END TEST nvme_multi_secondary 00:14:00.767 ************************************ 00:14:00.767 13:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 70927 00:14:00.767 13:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 70928 00:14:00.767 00:14:00.767 real 0m11.227s 00:14:00.767 user 0m18.550s 00:14:00.767 sys 0m0.964s 00:14:00.767 13:55:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.767 13:55:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:14:00.767 13:55:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:14:00.767 13:55:25 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:14:00.767 13:55:25 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:14:00.767 13:55:25 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/69861 ]] 00:14:00.767 13:55:25 nvme -- common/autotest_common.sh@1088 -- # kill 69861 00:14:00.767 13:55:25 nvme -- common/autotest_common.sh@1089 -- # wait 69861 00:14:00.767 [2024-07-15 13:55:25.126780] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.126868] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.126900] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.126928] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.129673] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.129946] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.130165] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.130407] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 
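Note: the repeated "The owning process (pid 70794) is not found. Dropping the request." errors here and on the following lines appear to be expected teardown noise rather than test failures: admin requests (the AER commands armed by the earlier tests) were queued through the long-lived stub process, and once the test processes that submitted them have exited, the driver drops them while kill_stub tears the stub down. The teardown steps themselves, as echoed in the trace (pid and paths from this run):

    [[ -e /proc/69861 ]] && kill 69861
    wait 69861                   # pending admin requests are dropped here
    rm -f /var/run/spdk_stub0    # remove the stub's marker file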
00:14:00.767 [2024-07-15 13:55:25.133167] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.133591] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.133827] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.134068] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.136945] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.137205] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.137541] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:00.767 [2024-07-15 13:55:25.137777] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70794) is not found. Dropping the request. 00:14:01.025 13:55:25 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:14:01.025 13:55:25 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:14:01.025 13:55:25 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:01.025 13:55:25 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:01.025 13:55:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.025 13:55:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.025 ************************************ 00:14:01.025 START TEST bdev_nvme_reset_stuck_adm_cmd 00:14:01.025 ************************************ 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:01.025 * Looking for test storage... 
00:14:01.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:01.025 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:14:01.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
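Note: before the error-injection test proper, the helpers above resolve the first NVMe PCI address, and an spdk_tgt instance is being brought up to drive over the RPC socket (its launch trace follows below). The equivalent standalone commands, taken from the trace (addresses are specific to this QEMU VM):

    # enumerate local NVMe controllers the way get_nvme_bdfs does above
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
            | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}    # 0000:00:10.0 in this run
    # start the SPDK target on cores 0-3; it serves RPCs on /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &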
00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=71082 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 71082 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 71082 ']' 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.283 13:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:01.283 [2024-07-15 13:55:25.760181] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:01.283 [2024-07-15 13:55:25.760356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71082 ] 00:14:01.540 [2024-07-15 13:55:25.960461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.797 [2024-07-15 13:55:26.258910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.797 [2024-07-15 13:55:26.258980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.797 [2024-07-15 13:55:26.259087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.797 [2024-07-15 13:55:26.259339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:02.728 nvme0n1 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_zmUcI.txt 00:14:02.728 13:55:27 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:02.728 true 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721051727 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=71110 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:14:02.728 13:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:05.250 [2024-07-15 13:55:29.244964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:14:05.250 [2024-07-15 13:55:29.245350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:05.250 [2024-07-15 13:55:29.245392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:05.250 [2024-07-15 13:55:29.245416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.250 [2024-07-15 13:55:29.247663] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
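Note: the reset just logged is the core of bdev_nvme_reset_stuck_adm_cmd: a one-shot error injection holds the next Get Features admin command (opcode 10) instead of submitting it, and the controller reset must then complete the stuck command manually with the injected status (sct=0, sc=1 — visible above as "INVALID OPCODE (00/01) ... sqhd:0000"). The RPC sequence as the test drives it, with arguments copied from the trace (the base64 payload is the encoded Get Features command and is elided here):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # arm a one-shot injected error for admin opcode 10 (Get Features)
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # submit the command that will get stuck, then reset the controller
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 cmd> &
    $RPC bdev_nvme_reset_controller nvme0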
00:14:05.250 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 71110 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 71110 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 71110 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_zmUcI.txt 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 
-- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_zmUcI.txt 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 71082 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 71082 ']' 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 71082 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71082 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71082' 00:14:05.250 killing process with pid 71082 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 71082 00:14:05.250 13:55:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 71082 00:14:07.778 13:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:14:07.778 13:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:14:07.778 ************************************ 00:14:07.778 END TEST bdev_nvme_reset_stuck_adm_cmd 00:14:07.778 ************************************ 00:14:07.778 00:14:07.778 real 0m6.306s 00:14:07.778 user 0m21.723s 00:14:07.778 sys 0m0.606s 00:14:07.778 13:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.778 13:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:07.778 13:55:31 nvme -- common/autotest_common.sh@1142 -- # return 0 00:14:07.778 13:55:31 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:14:07.778 13:55:31 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:14:07.778 13:55:31 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:07.778 13:55:31 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.778 13:55:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.778 ************************************ 00:14:07.778 START TEST nvme_fio 00:14:07.778 ************************************ 00:14:07.778 13:55:31 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:14:07.778 13:55:31 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:07.778 13:55:31 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:14:07.778 13:55:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:14:07.778 
13:55:31 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:07.778 13:55:31 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:14:07.778 13:55:31 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:07.778 13:55:31 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:07.778 13:55:31 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:07.778 13:55:31 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:14:07.778 13:55:31 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:07.778 13:55:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:14:07.778 13:55:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:14:07.778 13:55:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:07.778 13:55:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:07.778 13:55:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:07.778 13:55:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:07.778 13:55:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:08.036 13:55:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:08.036 13:55:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 
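Note: nvme_fio drives each controller through fio's external SPDK ioengine rather than the kernel NVMe driver; the ldd/grep/awk sequence above locates the ASan runtime the plugin was linked against so it can be preloaded ahead of the plugin. The run that follows, condensed to a standalone command with paths (and the dotted traddr spelling) exactly as they appear in the trace:

    FIO_PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    # preload the sanitizer runtime first, then the ioengine plugin
    LD_PRELOAD="/usr/lib64/libasan.so.8 $FIO_PLUGIN" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096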
00:14:08.036 13:55:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:14:08.294 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:14:08.294 fio-3.35
00:14:08.294 Starting 1 thread
00:14:11.585
00:14:11.585 test: (groupid=0, jobs=1): err= 0: pid=71261: Mon Jul 15 13:55:35 2024
00:14:11.585 read: IOPS=14.1k, BW=55.2MiB/s (57.9MB/s)(110MiB/2001msec)
00:14:11.585 slat (usec): min=4, max=118, avg= 7.12, stdev= 2.55
00:14:11.585 clat (usec): min=493, max=8564, avg=4508.55, stdev=955.14
00:14:11.585 lat (usec): min=503, max=8614, avg=4515.67, stdev=956.34
00:14:11.585 clat percentiles (usec):
00:14:11.585 | 1.00th=[ 2835], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3818],
00:14:11.585 | 30.00th=[ 3982], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4490],
00:14:11.585 | 70.00th=[ 4621], 80.00th=[ 4883], 90.00th=[ 5866], 95.00th=[ 6718],
00:14:11.585 | 99.00th=[ 7570], 99.50th=[ 7767], 99.90th=[ 8094], 99.95th=[ 8225],
00:14:11.585 | 99.99th=[ 8455]
00:14:11.585 bw ( KiB/s): min=50312, max=61040, per=96.44%, avg=54488.00, stdev=5745.13, samples=3
00:14:11.585 iops : min=12578, max=15260, avg=13622.00, stdev=1436.28, samples=3
00:14:11.585 write: IOPS=14.1k, BW=55.2MiB/s (57.9MB/s)(110MiB/2001msec); 0 zone resets
00:14:11.585 slat (nsec): min=4835, max=42700, avg=7265.77, stdev=2516.33
00:14:11.585 clat (usec): min=430, max=8466, avg=4516.43, stdev=957.87
00:14:11.585 lat (usec): min=439, max=8484, avg=4523.69, stdev=959.08
00:14:11.585 clat percentiles (usec):
00:14:11.585 | 1.00th=[ 2868], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3818],
00:14:11.585 | 30.00th=[ 3982], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4490],
00:14:11.585 | 70.00th=[ 4621], 80.00th=[ 4883], 90.00th=[ 5932], 95.00th=[ 6783],
00:14:11.585 | 99.00th=[ 7635], 99.50th=[ 7832], 99.90th=[ 8094], 99.95th=[ 8225],
00:14:11.585 | 99.99th=[ 8356]
00:14:11.585 bw ( KiB/s): min=50744, max=61152, per=96.46%, avg=54533.33, stdev=5752.00, samples=3
00:14:11.585 iops : min=12686, max=15288, avg=13633.33, stdev=1438.00, samples=3
00:14:11.585 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
00:14:11.585 lat (msec) : 2=0.10%, 4=30.68%, 10=69.20%
00:14:11.586 cpu : usr=98.70%, sys=0.20%, ctx=5, majf=0, minf=606
00:14:11.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:14:11.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:11.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:11.586 issued rwts: total=28264,28282,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:11.586 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:11.586
00:14:11.586 Run status group 0 (all jobs):
00:14:11.586 READ: bw=55.2MiB/s (57.9MB/s), 55.2MiB/s-55.2MiB/s (57.9MB/s-57.9MB/s), io=110MiB (116MB), run=2001-2001msec
00:14:11.586 WRITE: bw=55.2MiB/s (57.9MB/s), 55.2MiB/s-55.2MiB/s (57.9MB/s-57.9MB/s), io=110MiB (116MB), run=2001-2001msec
00:14:11.586 -----------------------------------------------------
00:14:11.586 Suppressions used:
00:14:11.586 count bytes template
00:14:11.586 1 32 /usr/src/fio/parse.c
00:14:11.586 1 8 libtcmalloc_minimal.so
00:14:11.586 -----------------------------------------------------
00:14:11.586
00:14:11.586 13:55:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
13:55:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:14:11.586 13:55:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:11.586 13:55:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:11.586 13:55:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:11.586 13:55:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:11.843 13:55:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:11.843 13:55:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:11.843 13:55:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:12.101 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:12.101 fio-3.35 00:14:12.101 Starting 1 thread 00:14:15.381 00:14:15.381 test: (groupid=0, jobs=1): err= 0: pid=71316: Mon Jul 15 13:55:39 2024 00:14:15.381 read: IOPS=15.0k, BW=58.7MiB/s (61.6MB/s)(118MiB/2001msec) 00:14:15.381 slat (nsec): min=4668, max=53806, avg=6669.45, stdev=2361.86 00:14:15.381 clat (usec): min=307, max=8955, avg=4236.27, stdev=944.71 00:14:15.381 lat (usec): min=313, max=8991, avg=4242.94, stdev=946.06 00:14:15.381 clat percentiles (usec): 00:14:15.381 | 1.00th=[ 2769], 5.00th=[ 3326], 10.00th=[ 3458], 20.00th=[ 3556], 00:14:15.381 | 30.00th=[ 3687], 40.00th=[ 3818], 50.00th=[ 4015], 60.00th=[ 4228], 00:14:15.381 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5604], 95.00th=[ 6587], 00:14:15.381 | 
99.00th=[ 7308], 99.50th=[ 7504], 99.90th=[ 7832], 99.95th=[ 8094], 00:14:15.381 | 99.99th=[ 8717] 00:14:15.381 bw ( KiB/s): min=52264, max=67816, per=100.00%, avg=61872.00, stdev=8398.50, samples=3 00:14:15.381 iops : min=13066, max=16954, avg=15468.00, stdev=2099.63, samples=3 00:14:15.381 write: IOPS=15.0k, BW=58.7MiB/s (61.6MB/s)(118MiB/2001msec); 0 zone resets 00:14:15.381 slat (nsec): min=4748, max=58987, avg=6808.50, stdev=2418.11 00:14:15.381 clat (usec): min=386, max=8766, avg=4245.94, stdev=946.36 00:14:15.381 lat (usec): min=393, max=8785, avg=4252.75, stdev=947.71 00:14:15.381 clat percentiles (usec): 00:14:15.381 | 1.00th=[ 2769], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3589], 00:14:15.381 | 30.00th=[ 3687], 40.00th=[ 3818], 50.00th=[ 4015], 60.00th=[ 4228], 00:14:15.381 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5604], 95.00th=[ 6587], 00:14:15.381 | 99.00th=[ 7308], 99.50th=[ 7504], 99.90th=[ 7963], 99.95th=[ 8160], 00:14:15.381 | 99.99th=[ 8455] 00:14:15.381 bw ( KiB/s): min=51000, max=67400, per=100.00%, avg=61461.33, stdev=9087.40, samples=3 00:14:15.381 iops : min=12750, max=16850, avg=15365.33, stdev=2271.85, samples=3 00:14:15.381 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:14:15.381 lat (msec) : 2=0.15%, 4=49.02%, 10=50.79% 00:14:15.381 cpu : usr=98.95%, sys=0.05%, ctx=3, majf=0, minf=605 00:14:15.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:15.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.381 issued rwts: total=30082,30091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.381 00:14:15.381 Run status group 0 (all jobs): 00:14:15.381 READ: bw=58.7MiB/s (61.6MB/s), 58.7MiB/s-58.7MiB/s (61.6MB/s-61.6MB/s), io=118MiB (123MB), run=2001-2001msec 00:14:15.381 WRITE: bw=58.7MiB/s (61.6MB/s), 58.7MiB/s-58.7MiB/s (61.6MB/s-61.6MB/s), io=118MiB (123MB), run=2001-2001msec 00:14:15.381 ----------------------------------------------------- 00:14:15.381 Suppressions used: 00:14:15.381 count bytes template 00:14:15.381 1 32 /usr/src/fio/parse.c 00:14:15.381 1 8 libtcmalloc_minimal.so 00:14:15.381 ----------------------------------------------------- 00:14:15.381 00:14:15.381 13:55:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:14:15.381 13:55:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:15.381 13:55:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:15.381 13:55:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:15.639 13:55:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:15.639 13:55:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:15.896 13:55:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:15.896 13:55:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:14:15.896 13:55:40 nvme.nvme_fio -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:15.896 13:55:40 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:14:16.154 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:16.154 fio-3.35 00:14:16.154 Starting 1 thread 00:14:19.434 00:14:19.434 test: (groupid=0, jobs=1): err= 0: pid=71377: Mon Jul 15 13:55:43 2024 00:14:19.434 read: IOPS=14.5k, BW=56.6MiB/s (59.3MB/s)(113MiB/2001msec) 00:14:19.434 slat (nsec): min=4700, max=44260, avg=6905.80, stdev=2249.57 00:14:19.434 clat (usec): min=380, max=10205, avg=4402.38, stdev=829.89 00:14:19.434 lat (usec): min=385, max=10211, avg=4409.29, stdev=830.77 00:14:19.434 clat percentiles (usec): 00:14:19.434 | 1.00th=[ 2671], 5.00th=[ 3294], 10.00th=[ 3458], 20.00th=[ 3785], 00:14:19.434 | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:14:19.434 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5473], 95.00th=[ 6128], 00:14:19.434 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 8225], 99.95th=[ 9110], 00:14:19.434 | 99.99th=[10028] 00:14:19.434 bw ( KiB/s): min=54768, max=58392, per=96.78%, avg=56056.00, stdev=2026.59, samples=3 00:14:19.434 iops : min=13692, max=14598, avg=14014.00, stdev=506.65, samples=3 00:14:19.434 write: IOPS=14.5k, BW=56.6MiB/s (59.4MB/s)(113MiB/2001msec); 0 zone resets 00:14:19.434 slat (nsec): min=4807, max=39340, avg=7046.22, stdev=2261.00 00:14:19.434 clat (usec): min=369, max=10274, avg=4401.93, stdev=837.57 00:14:19.434 lat (usec): min=375, max=10280, avg=4408.98, stdev=838.48 00:14:19.434 clat percentiles (usec): 00:14:19.434 | 1.00th=[ 2638], 5.00th=[ 3294], 10.00th=[ 3458], 20.00th=[ 3752], 00:14:19.434 | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:14:19.434 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5473], 95.00th=[ 6194], 00:14:19.434 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 8094], 99.95th=[ 8848], 00:14:19.434 | 99.99th=[10028] 00:14:19.434 bw ( KiB/s): min=54448, max=58280, 
per=96.61%, avg=56042.67, stdev=1995.20, samples=3 00:14:19.434 iops : min=13612, max=14570, avg=14010.67, stdev=498.80, samples=3 00:14:19.434 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:14:19.434 lat (msec) : 2=0.18%, 4=24.02%, 10=75.75%, 20=0.01% 00:14:19.434 cpu : usr=98.60%, sys=0.35%, ctx=3, majf=0, minf=606 00:14:19.434 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:19.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:19.434 issued rwts: total=28974,29019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:19.434 00:14:19.434 Run status group 0 (all jobs): 00:14:19.434 READ: bw=56.6MiB/s (59.3MB/s), 56.6MiB/s-56.6MiB/s (59.3MB/s-59.3MB/s), io=113MiB (119MB), run=2001-2001msec 00:14:19.434 WRITE: bw=56.6MiB/s (59.4MB/s), 56.6MiB/s-56.6MiB/s (59.4MB/s-59.4MB/s), io=113MiB (119MB), run=2001-2001msec 00:14:19.434 ----------------------------------------------------- 00:14:19.434 Suppressions used: 00:14:19.434 count bytes template 00:14:19.434 1 32 /usr/src/fio/parse.c 00:14:19.434 1 8 libtcmalloc_minimal.so 00:14:19.434 ----------------------------------------------------- 00:14:19.434 00:14:19.434 13:55:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:14:19.434 13:55:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:19.434 13:55:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:14:19.434 13:55:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:19.692 13:55:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:14:19.692 13:55:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:19.950 13:55:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:19.950 13:55:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:19.950 13:55:44 nvme.nvme_fio -- 
common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:19.950 13:55:44 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:14:20.208 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:20.208 fio-3.35 00:14:20.208 Starting 1 thread 00:14:24.455 00:14:24.455 test: (groupid=0, jobs=1): err= 0: pid=71438: Mon Jul 15 13:55:48 2024 00:14:24.455 read: IOPS=15.2k, BW=59.6MiB/s (62.5MB/s)(119MiB/2001msec) 00:14:24.455 slat (nsec): min=4661, max=47108, avg=6584.61, stdev=1975.56 00:14:24.455 clat (usec): min=268, max=8388, avg=4174.80, stdev=616.74 00:14:24.455 lat (usec): min=273, max=8398, avg=4181.39, stdev=617.63 00:14:24.455 clat percentiles (usec): 00:14:24.455 | 1.00th=[ 2802], 5.00th=[ 3425], 10.00th=[ 3556], 20.00th=[ 3687], 00:14:24.455 | 30.00th=[ 3916], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4293], 00:14:24.455 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5473], 00:14:24.455 | 99.00th=[ 6521], 99.50th=[ 7177], 99.90th=[ 7570], 99.95th=[ 7635], 00:14:24.455 | 99.99th=[ 7963] 00:14:24.455 bw ( KiB/s): min=58264, max=61152, per=98.33%, avg=59984.00, stdev=1521.07, samples=3 00:14:24.455 iops : min=14566, max=15288, avg=14996.00, stdev=380.27, samples=3 00:14:24.455 write: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(119MiB/2001msec); 0 zone resets 00:14:24.455 slat (nsec): min=4789, max=53763, avg=6768.86, stdev=2053.33 00:14:24.455 clat (usec): min=277, max=8164, avg=4179.32, stdev=616.44 00:14:24.455 lat (usec): min=283, max=8172, avg=4186.09, stdev=617.33 00:14:24.455 clat percentiles (usec): 00:14:24.455 | 1.00th=[ 2835], 5.00th=[ 3425], 10.00th=[ 3556], 20.00th=[ 3687], 00:14:24.455 | 30.00th=[ 3916], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4293], 00:14:24.455 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5473], 00:14:24.455 | 99.00th=[ 6390], 99.50th=[ 7177], 99.90th=[ 7570], 99.95th=[ 7635], 00:14:24.455 | 99.99th=[ 7832] 00:14:24.455 bw ( KiB/s): min=57432, max=60920, per=97.69%, avg=59680.00, stdev=1950.28, samples=3 00:14:24.455 iops : min=14358, max=15230, avg=14920.00, stdev=487.57, samples=3 00:14:24.455 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:14:24.455 lat (msec) : 2=0.07%, 4=32.61%, 10=67.28% 00:14:24.455 cpu : usr=98.90%, sys=0.10%, ctx=4, majf=0, minf=603 00:14:24.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:24.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:24.455 issued rwts: total=30517,30562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:24.455 00:14:24.455 Run status group 0 (all jobs): 00:14:24.455 READ: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=119MiB (125MB), run=2001-2001msec 00:14:24.455 WRITE: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=119MiB (125MB), run=2001-2001msec 00:14:24.455 
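The same sequence has now run once per controller (0000:00:10.0 through 0000:00:13.0). Condensed into one function, the flow traced by nvme.sh@35-43 and the fio_plugin helper in autotest_common.sh@1337-1352 looks roughly like the sketch below. Only the plain bs=4096 branch ever fires in this log, so the 4104 extended-LBA block size is an assumed value; the rest mirrors the xtrace.

# Condensed sketch of the per-device flow traced above (nvme.sh@35-43 plus
# autotest_common.sh@1337-1352). 4104 (4096B data + 8B inline metadata) is
# an assumed value for the Extended Data LBA case; this log never hits it.
run_fio_on_bdf() {
    local bdf=$1
    local identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme

    # nvme.sh@35: skip controllers without an active namespace.
    "$identify" -r "trtype:PCIe traddr:$bdf" \
        | grep -qE '^Namespace ID:[0-9]+' || return 0

    # nvme.sh@38-41: with Extended Data LBA, metadata is interleaved with
    # the data, so fio's block size must cover both.
    local bs=4096
    "$identify" -r "trtype:PCIe traddr:$bdf" \
        | grep -q 'Extended Data LBA' && bs=4104  # assumed value

    # autotest_common.sh@1344-1347: find whichever ASAN runtime the fio
    # plugin links against, so it can be preloaded ahead of the plugin.
    local sanitizer asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done

    # autotest_common.sh@1352: preload sanitizer + plugin; escape ':' as
    # '.' in the PCIe address so fio keeps the filename in one piece.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
}
run_fio_on_bdf 0000:00:13.0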
----------------------------------------------------- 00:14:24.455 Suppressions used: 00:14:24.455 count bytes template 00:14:24.455 1 32 /usr/src/fio/parse.c 00:14:24.455 1 8 libtcmalloc_minimal.so 00:14:24.455 ----------------------------------------------------- 00:14:24.455 00:14:24.455 13:55:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:14:24.455 13:55:48 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:14:24.455 00:14:24.455 real 0m16.937s 00:14:24.455 user 0m13.352s 00:14:24.455 sys 0m2.670s 00:14:24.455 13:55:48 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:24.455 13:55:48 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:14:24.455 ************************************ 00:14:24.455 END TEST nvme_fio 00:14:24.455 ************************************ 00:14:24.455 13:55:48 nvme -- common/autotest_common.sh@1142 -- # return 0 00:14:24.455 00:14:24.455 real 1m31.186s 00:14:24.455 user 3m46.298s 00:14:24.455 sys 0m14.546s 00:14:24.455 13:55:48 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:24.455 13:55:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:24.455 ************************************ 00:14:24.455 END TEST nvme 00:14:24.455 ************************************ 00:14:24.455 13:55:48 -- common/autotest_common.sh@1142 -- # return 0 00:14:24.455 13:55:48 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:14:24.455 13:55:48 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:14:24.455 13:55:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:24.455 13:55:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.455 13:55:48 -- common/autotest_common.sh@10 -- # set +x 00:14:24.455 ************************************ 00:14:24.455 START TEST nvme_scc 00:14:24.455 ************************************ 00:14:24.455 13:55:48 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:14:24.455 * Looking for test storage... 
00:14:24.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:24.455 13:55:48 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:24.455 13:55:48 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.455 13:55:48 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.455 13:55:48 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.455 13:55:48 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.455 13:55:48 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.455 13:55:48 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.455 13:55:48 nvme_scc -- paths/export.sh@5 -- # export PATH 00:14:24.455 13:55:48 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:14:24.455 13:55:48 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:14:24.455 13:55:48 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.455 13:55:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:14:24.455 13:55:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:14:24.455 13:55:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:14:24.455 13:55:48 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:25.022 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:25.022 Waiting for block devices as requested 00:14:25.022 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:25.297 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:25.297 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:25.297 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:30.567 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:30.567 13:55:54 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:14:30.567 13:55:54 nvme_scc -- scripts/common.sh@15 -- # local i 00:14:30.567 13:55:54 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:14:30.567 13:55:54 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:30.567 13:55:54 nvme_scc -- scripts/common.sh@24 -- # return 0 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:14:30.567 
13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.567 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:14:30.568 13:55:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:14:30.568 13:55:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.568 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.569 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
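The register dump that scan_nvme_ctrls is producing here comes from the nvme_get helper (functions.sh@17-23): it splits each "reg : val" line of nvme id-ctrl output on the first colon and evals the pair into a global associative array named after the controller, which is why every value surfaces as an nvme0[...] assignment in the trace. A stripped-down sketch, with simplified whitespace handling compared to the real functions.sh:

# Stripped-down sketch of nvme_get (functions.sh@17-23). The real helper
# does more careful whitespace handling; this keeps only the core idea:
# one "reg : val" line of id-ctrl output becomes one associative-array
# entry, with the array name supplied dynamically (hence the eval).
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref=()"
    while IFS=: read -r reg val; do          # split on the FIRST ':' only,
        reg=${reg//[[:space:]]/}             # so values such as subnqn keep
        val=${val#"${val%%[![:space:]]*}"}   # their embedded colons
        [[ -n $reg && -n $val ]] && eval "$ref[\$reg]=\$val"
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
}

nvme_get_sketch nvme0 /dev/nvme0
echo "${nvme0[mdts]}"    # -> 7, matching the nvme0[mdts]=7 entry above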
00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 
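The block above repeats one small pattern hundreds of times: nvme_get runs nvme-cli's id-ctrl (or, as just started here, id-ns) against a device, splits every "field : value" output line on the colon, and evals each pair into a global associative array named after the device node. A minimal standalone sketch of that pattern, assuming nvme-cli is installed and prints the "field : value" layout seen in this trace (nvme_get_sketch is an illustrative name, not the functions.sh helper):

    #!/usr/bin/env bash
    # Parse `nvme id-ctrl <dev>` output into a global associative array.
    nvme_get_sketch() {
        local dev=$2 reg val
        declare -gA "$1"              # create the global array, e.g. nvme0
        local -n arr=$1               # nameref, like the trace's `local -n`
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}  # keys are single words: "ps    0" -> ps0
            val=${val# }              # drop the space printed after ':'
            [[ -n $reg && -n $val ]] || continue
            arr[$reg]=$val            # e.g. nvme0[subnqn]=nqn.2019-08.org.qemu:12341
        done < <(nvme id-ctrl "$dev")
    }

    nvme_get_sketch nvme0 /dev/nvme0
    echo "${nvme0[sn]} ${nvme0[subnqn]}"

Using a nameref instead of the trace's eval keeps the quoting simpler but needs bash 4.3+; either way, multi-word values such as ps0 and rwt land in the array as single entries, exactly as the assignments above show.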
-- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:14:30.570 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:30.571 13:55:54 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:14:30.571 13:55:54 nvme_scc -- scripts/common.sh@15 -- # local i 00:14:30.571 13:55:54 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:14:30.571 13:55:54 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:30.571 13:55:54 nvme_scc -- scripts/common.sh@24 -- # return 0 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 
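Sandwiched between the two identify dumps is the enumeration step: the outer loop at functions.sh@47 walks /sys/class/nvme/nvme*, resolves each controller's PCI address (the bdfs[] values, 0000:00:11.0 and 0000:00:10.0 here), gates it through pci_can_use, and then globs the controller's namespaces. A rough standalone equivalent that works against any Linux sysfs; the allow/block-list gating that pci_can_use applies is deliberately omitted from this sketch:

    #!/usr/bin/env bash
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                       # glob may match nothing
        bdf=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:10.0
        echo "${ctrl##*/} @ $bdf"
        for ns in "$ctrl/${ctrl##*/}n"*; do              # same glob as functions.sh@54
            [[ -e $ns ]] && echo "  namespace: ${ns##*/}"
        done
    done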
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.571 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
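One value worth decoding from the dump above: mdts=7 caps any single data transfer at 2^7 minimum-size pages. Assuming a 4 KiB minimum page size (CAP.MPSMIN=0, typical for QEMU's emulated controller but not shown in this trace), that works out to:

    echo "$(( (1 << 7) * 4 ))KiB"   # 512KiB max data per I/O command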
'nvme1[ver]="0x10400"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:14:30.572 13:55:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 
13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:14:30.572 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:14:30.573 13:55:54 nvme_scc -- 
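The wctemp/cctemp values just parsed are kelvins, per the Identify Controller layout, so these are the stock warning/critical thresholds rather than anything alarming:

    echo "warning:  $(( 343 - 273 )) C"   # ~70 °C
    echo "critical: $(( 373 - 273 )) C"   # ~100 °C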
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:14:30.573 13:55:55 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r 
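sqes=0x66 and cqes=0x44 above pack two log2 sizes into one byte each: the low nibble is the required (minimum) queue entry size, the high nibble the maximum. Decoding both:

    for v in 0x66 0x44; do
        echo "$v: min $(( 1 << (v & 0xf) ))B, max $(( 1 << (v >> 4) ))B"
    done
    # 0x66: min 64B, max 64B   (submission queue entries)
    # 0x44: min 16B, max 16B   (completion queue entries)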
reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.573 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:14:30.574 13:55:55 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.574 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 
13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.575 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:14:30.576 13:55:55 nvme_scc -- scripts/common.sh@15 -- # local i 00:14:30.576 13:55:55 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:14:30.576 13:55:55 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:30.576 13:55:55 nvme_scc -- scripts/common.sh@24 -- # return 0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:30.576 13:55:55 
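At this point the trace finishes nvme1n1, files controller nvme1 away in the bookkeeping arrays, and moves on to nvme2 at PCI address 0000:00:12.0. A hedged reconstruction of that outer loop; the array names and the pci_can_use call come from the trace, while the sysfs-to-BDF step and loop scaffolding are assumptions:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
	[[ -e $ctrl ]] || continue
	pci=$(readlink -f "$ctrl/device") && pci=${pci##*/}  # assumption: BDF via sysfs symlink
	pci_can_use "$pci" || continue      # scripts/common.sh filter (PCI block/allow lists)
	ctrl_dev=${ctrl##*/}                # nvme1, nvme2, ...
	nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
	ctrls["$ctrl_dev"]=$ctrl_dev
	nvmes["$ctrl_dev"]=${ctrl_dev}_ns   # name of this controller's namespace map
	bdfs["$ctrl_dev"]=$pci              # e.g. bdfs[nvme1]=0000:00:10.0
	ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done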
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:14:30.576 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.577 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.841 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.841 13:55:55 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:14:30.842 13:55:55 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 
13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.842 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # 
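With nvme2's id-ctrl data captured, the per-namespace pass begins: a nameref (_ctrl_ns) aliases the controller's namespace map (nvme2_ns), each nvme2n* sysfs entry is probed, and the same helper fills one array per namespace. A sketch under the same assumptions as above:

scan_namespaces_sketch() {             # hypothetical wrapper; arg: /sys/class/nvme/nvmeX
	local ctrl=$1 ns ns_dev
	declare -gA "${ctrl##*/}_ns=()"    # ensure the map exists (assumption)
	local -n _ctrl_ns=${ctrl##*/}_ns   # _ctrl_ns -> nvme2_ns, as in the trace
	for ns in "$ctrl/${ctrl##*/}n"*; do
		[[ -e $ns ]] || continue
		ns_dev=${ns##*/}               # e.g. nvme2n1
		nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
		_ctrl_ns[${ns##*n}]=$ns_dev    # index by namespace number: _ctrl_ns[1]=nvme2n1
	done
}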
read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:14:30.843 13:55:55 nvme_scc -- 
00:14:30.843 13:55:55 nvme_scc -- nvme/functions.sh@21-23 -- parsing of nvme id-ns /dev/nvme2n1 continues; the remaining fields recorded into the nvme2n1 array:

  npwg=0  npwa=0  npdg=0  npda=0  nows=0
  mssrl=128  mcl=128  msrc=127  nulbaf=0
  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
  nguid=00000000000000000000000000000000
  eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0'   lbaf1='ms:8 lbads:9 rp:0'
  lbaf2='ms:16 lbads:9 rp:0'  lbaf3='ms:64 lbads:9 rp:0'
  lbaf4='ms:0 lbads:12 rp:0 (in use)'
  lbaf5='ms:8 lbads:12 rp:0'  lbaf6='ms:16 lbads:12 rp:0'
  lbaf7='ms:64 lbads:12 rp:0'

functions.sh@58 then registers the namespace (_ctrl_ns[1]=nvme2n1); the @54 loop finds /sys/class/nvme/nvme2/nvme2n2 (@55), sets ns_dev=nvme2n2 (@56), and @57 invokes nvme_get nvme2n2 id-ns /dev/nvme2n2, which declares the global associative array nvme2n2 (@20) and runs /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 (@16).
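Every reg/val pair above comes from the same four traced steps: @21 sets IFS=: and reads a "name : value" line, @22 skips empty values, and @23 evals the assignment into the target array. A minimal sketch of that loop, reconstructed from the trace; the field-name normalization and the NVME_BIN fallback are assumptions here, and the real helper in SPDK's nvme/functions.sh handles more cases than shown:

  #!/usr/bin/env bash
  # Sketch of the nvme_get loop traced at nvme/functions.sh@16-23.
  # Assumption: nvme-cli prints one "name : value" field per line.
  nvme_get() {
      local ref=$1 cmd=$2 dev=$3 reg val
      local -gA "$ref=()"                       # @20: global associative array
      while IFS=: read -r reg val; do           # @21: split on the first ':'
          reg=${reg//[[:space:]]/}              # "lbaf  0" -> "lbaf0" (assumed)
          val="${val#"${val%%[![:space:]]*}"}"  # trim the padding after ':'
          [[ -n $val ]] || continue             # @22: skip blank/header lines
          eval "${ref}[${reg}]=\"\${val}\""     # @23: nvme2n2[nsze]=0x100000
      done < <("${NVME_BIN:-nvme}" "$cmd" "$dev")
  }

  # e.g.: nvme_get nvme2n2 id-ns /dev/nvme2n2; echo "${nvme2n2[nsze]}"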
00:14:30.844-.846 13:55:55 nvme_scc -- nvme/functions.sh@21-23 -- for nvme2n2 the loop records the same values as nvme2n1:

  nsze=0x100000  ncap=0x100000  nuse=0x100000
  nsfeat=0x14  nlbaf=7  flbas=0x4  mc=0x3  dpc=0x1f  dps=0
  nmic=0  rescap=0  fpi=0  dlfeat=1
  nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0
  noiob=0  nvmcap=0  npwg=0  npwa=0  npdg=0  npda=0  nows=0
  mssrl=128  mcl=128  msrc=127  nulbaf=0
  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
  nguid=00000000000000000000000000000000
  eui64=0000000000000000
  lbaf0-lbaf7 identical to nvme2n1, with lbaf4 'ms:0 lbads:12 rp:0' in use

functions.sh@58 registers _ctrl_ns[2]=nvme2n2; the @54 loop then finds /sys/class/nvme/nvme2/nvme2n3 (@55), sets ns_dev=nvme2n3 (@56), and @57 invokes nvme_get nvme2n3 id-ns /dev/nvme2n3.
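The @54-58 lines bracketing each namespace show the enumeration loop itself: glob the controller's sysfs directory, parse each namespace, and index the result by namespace id. A sketch consistent with the traced calls (the paths and the _ctrl_ns name are taken from the log; error handling is omitted):

  # Sketch of the per-controller namespace scan (functions.sh@54-58).
  declare -A _ctrl_ns=()
  ctrl=/sys/class/nvme/nvme2                  # set by the outer controller loop
  for ns in "$ctrl/${ctrl##*/}n"*; do         # @54: nvme2n1 nvme2n2 nvme2n3 ...
      [[ -e $ns ]] || continue                # @55: the glob may match nothing
      ns_dev=${ns##*/}                        # @56: basename, e.g. nvme2n3
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # @57: fill the nvme2n3 array
      _ctrl_ns[${ns##*n}]=$ns_dev             # @58: key = namespace id ("3")
  done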
00:14:30.846-.847 13:55:55 nvme_scc -- nvme/functions.sh@16-23 -- @20 declares the global array nvme2n3 and nvme2n3 is parsed the same way: /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 reports field values identical to nvme2n1 and nvme2n2 (nsze/ncap/nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl/mcl=128, msrc=127, all other counters 0, all-zero nguid/eui64, and the same lbaf0-lbaf7 table with lbaf4 in use), all recorded into the nvme2n3 array.
00:14:30.847 13:55:55 nvme_scc -- nvme/functions.sh@58-63 -- _ctrl_ns[3]=nvme2n3 completes the namespace scan, and controller nvme2 is registered: ctrls[nvme2]=nvme2 (@60), nvmes[nvme2]=nvme2_ns (@61), bdfs[nvme2]=0000:00:12.0 (@62), ordered_ctrls[2]=nvme2 (@63). The @47 loop advances to /sys/class/nvme/nvme3 (@48), pci=0000:00:13.0 (@49). pci_can_use 0000:00:13.0 (@50, scripts/common.sh@15-24) returns 0 -- the trace shows [[ =~ 0000:00:13.0 ]] with an empty left-hand side and [[ -z '' ]], i.e. no block or allow filters are set. @51-52 then set ctrl_dev=nvme3 and run nvme_get nvme3 id-ctrl /dev/nvme3; the first fields recorded are vid=0x1b36, ssvid=0x1af4, sn='12343', mn='QEMU NVMe Ctrl'.
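The controller-level bookkeeping traced at @47-63 wraps the namespace scan above. A condensed sketch; the PCI_BLOCKED/PCI_ALLOWED variable names and the sysfs readlink used to derive the BDF are assumptions, since the trace only shows the empty-filter path through scripts/common.sh:

  # Sketch of the controller scan and registration (functions.sh@47-63).
  declare -A ctrls=() nvmes=() bdfs=()
  declare -a ordered_ctrls=()

  pci_can_use() {                                  # scripts/common.sh@15-24
      local i=$1
      [[ ${PCI_BLOCKED:-} =~ $i ]] && return 1     # @18 (variable name assumed)
      [[ -z ${PCI_ALLOWED:-} ]] && return 0        # @22: no allow-list, usable
      [[ ${PCI_ALLOWED} =~ $i ]]
  }

  for ctrl in /sys/class/nvme/nvme*; do            # @47
      [[ -e $ctrl ]] || continue                   # @48
      pci=$(basename "$(readlink -f "$ctrl/device")")  # @49 (derivation assumed)
      pci_can_use "$pci" || continue               # @50
      ctrl_dev=${ctrl##*/}                         # @51: e.g. nvme3
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52
      # ... namespace scan from the sketch above (@54-58) ...
      ctrls[$ctrl_dev]=$ctrl_dev                   # @60
      nvmes[$ctrl_dev]=${ctrl_dev}_ns              # @61
      bdfs[$ctrl_dev]=$pci                         # @62
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # @63: index by ctrl number
  done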
00:14:30.848-.849 13:55:55 nvme_scc -- nvme/functions.sh@21-23 -- the remaining id-ctrl fields recorded for nvme3:

  fr='8.0.0'  rab=6  ieee=525400  cmic=0x2  mdts=7  cntlid=0
  ver=0x10400  rtd3r=0  rtd3e=0  oaes=0x100  ctratt=0x88010  rrls=0
  cntrltype=1  fguid=00000000-0000-0000-0000-000000000000
  crdt1=0  crdt2=0  crdt3=0  nvmsr=0  vwci=0  mec=0
  oacs=0x12a  acl=3  aerl=3  frmw=0x3  lpa=0x7  elpe=0  npss=0
  avscc=0  apsta=0  wctemp=343  cctemp=373  mtfa=0
  hmpre=0  hmmin=0  tnvmcap=0  unvmcap=0  rpmbs=0  edstt=0
  dsto=0  fwug=0  kas=0  hctma=0  mntmt=0  mxtmt=0  sanicap=0
  hmminds=0  hmmaxd=0  nsetidmax=0  endgidmax=1  anatt=0
  anacap=0  anagrpmax=0  nanagrpid=0  pels=0  domainid=0
  megcap=0  sqes=0x66

(parse of /dev/nvme3 continues)
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.849 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:14:30.850 
13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
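The xtrace above is functions.sh building one associative array per controller: each `reg : val` line printed by the harness's nvme-cli build (`/usr/local/src/nvme-cli/nvme id-ctrl ...`) becomes a `[[ -n ... ]]` guard, an eval, and an assignment. A minimal sketch of that parsing pattern, assuming the stock colon-separated id-ctrl output; the array name and trimming details here are illustrative, not the exact functions.sh internals:

    declare -A ctrl_regs                      # register name -> raw value
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # id-ctrl pads keys with spaces
        [[ -n $reg && -n $val ]] || continue  # skip blank or partial lines
        ctrl_regs[$reg]=${val# }              # drop the space after the colon
    done < <(nvme id-ctrl /dev/nvme3)
    echo "oncs=${ctrl_regs[oncs]:-unset}"     # e.g. 0x15d on these QEMU devices

Because the last variable passed to read keeps the remainder of the line, values that themselves contain colons (such as subnqn=nqn.2019-08.org.qemu:fdp-subsys3 above) survive intact.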
00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:14:30.850 13:55:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:14:30.850 13:55:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:14:30.851 13:55:55 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:14:30.851 13:55:55 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:14:30.851 13:55:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:14:30.851 13:55:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:14:30.851 13:55:55 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:31.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:31.979 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:31.979 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:31.980 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:31.980 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:32.236 13:55:56 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:14:32.236 13:55:56 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:32.236 13:55:56 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.236 13:55:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:14:32.236 ************************************ 00:14:32.236 START TEST nvme_simple_copy 00:14:32.236 ************************************ 00:14:32.236 13:55:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:14:32.494 Initializing NVMe Controllers 00:14:32.494 Attaching to 0000:00:10.0 00:14:32.494 Controller supports SCC. Attached to 0000:00:10.0 00:14:32.494 Namespace ID: 1 size: 6GB 00:14:32.494 Initialization complete. 00:14:32.494 00:14:32.494 Controller QEMU NVMe Ctrl (12340 ) 00:14:32.494 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:14:32.494 Namespace Block Size:4096 00:14:32.494 Writing LBAs 0 to 63 with Random Data 00:14:32.494 Copied LBAs from 0 - 63 to the Destination LBA 256 00:14:32.494 LBAs matching Written Data: 64 00:14:32.494 00:14:32.494 real 0m0.306s 00:14:32.494 user 0m0.121s 00:14:32.494 sys 0m0.083s 00:14:32.494 13:55:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.494 13:55:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:14:32.494 ************************************ 00:14:32.494 END TEST nvme_simple_copy 00:14:32.494 ************************************ 00:14:32.494 13:55:56 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:14:32.494 00:14:32.494 real 0m8.030s 00:14:32.494 user 0m1.349s 00:14:32.494 sys 0m1.665s 00:14:32.494 13:55:56 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.494 ************************************ 00:14:32.494 END TEST nvme_scc 00:14:32.494 ************************************ 00:14:32.494 13:55:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:14:32.494 13:55:56 -- common/autotest_common.sh@1142 -- # return 0 00:14:32.494 13:55:56 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:14:32.494 13:55:56 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:14:32.494 13:55:56 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:14:32.494 13:55:56 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:14:32.494 13:55:56 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:14:32.494 13:55:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:32.494 13:55:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.494 13:55:56 -- common/autotest_common.sh@10 -- # set +x 00:14:32.494 ************************************ 00:14:32.495 START TEST nvme_fdp 00:14:32.495 ************************************ 00:14:32.495 13:55:56 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:14:32.495 * Looking for test storage... 
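Before the fdp run proceeds, a note on the selection logic that ran above: get_ctrls_with_feature filtered the four controllers by testing ONCS bit 8, which the NVMe spec assigns to the Simple Copy command. A hedged standalone sketch of that predicate (the function name mirrors the log; the literal value is the 0x15d these QEMU controllers report):

    ctrl_has_scc() {
        local oncs=$1          # ONCS word from id-ctrl, e.g. 0x15d
        (( oncs & 1 << 8 ))    # bit 8 set => Simple Copy supported
    }

    ctrl_has_scc 0x15d && echo "supports SCC"   # 0x15d & 0x100 = 0x100, true

All four controllers report oncs=0x15d, so all pass the check; the log shows the harness echoing each match and taking the first entry of the resulting list, nvme1 at 0000:00:10.0, for the simple-copy run below.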
00:14:32.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:32.495 13:55:57 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:32.495 13:55:57 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.495 13:55:57 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.495 13:55:57 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.495 13:55:57 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.495 13:55:57 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.495 13:55:57 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.495 13:55:57 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:14:32.495 13:55:57 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:14:32.495 13:55:57 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:14:32.495 13:55:57 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.495 13:55:57 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:33.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:33.058 Waiting for block devices as requested 00:14:33.058 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:33.316 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:33.316 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:33.316 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:38.582 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:38.582 13:56:02 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:14:38.582 13:56:02 nvme_fdp -- scripts/common.sh@15 -- # local i 00:14:38.582 13:56:02 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:14:38.582 13:56:02 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:38.582 13:56:02 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 
13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.582 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:14:38.583 13:56:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:14:38.583 13:56:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:14:38.583 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:14:38.584 13:56:02 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.584 13:56:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:38.585 
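The nvme_get call traced just above shows the whole helper in action: it builds a global associative array named after its first argument (functions.sh@20), runs nvme-cli with the remaining arguments (@16), splits each output line on ':' into a reg/val pair (@21), and eval-assigns every non-empty value (@22-@23). A minimal reconstruction of that pattern from the traced line numbers; field trimming is an assumption here and may differ in detail from the real functions.sh:

    nvme_get() {                                 # e.g. nvme_get nvme0n1 id-ns /dev/nvme0n1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # global associative array, one per device
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # nvme-cli pads field names with spaces
            val=${val#"${val%%[![:space:]]*}"}   # ltrim only; trailing spaces survive, as in nvme1[sn]='12340 '
            [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

After the call, fields are read back as e.g. ${nvme0n1[nsze]}; the eval quoting is what lets multi-word values like the ps0 power-state string land in a single array slot.
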
13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:14:38.585 
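Among the namespace fields just captured, dpc (end-to-end data protection capabilities, 0x1f here) and dps (the setting actually in effect, 0) pair up: the device advertises every PI option but none is enabled. A sketch of decoding them, assuming the base-spec layout (bits 0-2 = PI Types 1-3, bits 3-4 = PI in first/last bytes of metadata):

    dpc=0x1f dps=0
    for bit in 0 1 2; do
        (( dpc & (1 << bit) )) && echo "PI Type $(( bit + 1 )) supported"
    done
    (( dps == 0 )) && echo "no end-to-end protection currently enabled"
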
13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.585 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:14:38.586 13:56:03 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.586 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:14:38.587 13:56:03 nvme_fdp -- scripts/common.sh@15 -- # local i 00:14:38.587 13:56:03 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:14:38.587 13:56:03 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:38.587 13:56:03 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:14:38.587 13:56:03 nvme_fdp -- 
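The trace has now wrapped up nvme0 (the @58-@63 bookkeeping lines) and moved on to nvme1, which makes the outer scan loop at functions.sh@47-63 visible end to end: each /sys/class/nvme/nvme* controller is gated through pci_can_use, identified with nvme_get, its namespaces identified in turn, and the results registered in global lookup arrays. A skeleton reconstructed from the traced line numbers; how $pci is derived and where the arrays are first declared are not shown in this excerpt, so both are assumptions:

    declare -gA ctrls nvmes bdfs                              # assumption: declared before the scan
    declare -ga ordered_ctrls
    scan_nvme_ctrls() {
        local ctrl pci ctrl_dev ns ns_dev
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption: BDF via the sysfs device link
            pci_can_use "$pci" || continue                    # scripts/common.sh gate seen at @15-@24
            ctrl_dev=${ctrl##*/}
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            declare -gA "${ctrl_dev}_ns=()"                   # assumption: per-controller namespace map
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/${ctrl##*/}n"*; do
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev                   # e.g. nvme0_ns[1]=nvme0n1
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci                            # trace shows 0000:00:11.0 for nvme0
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }
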
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 
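The ver field just captured (0x10400) is the controller's NVMe version packed per the VS register layout: major in bits 31:16, minor in 15:08, tertiary in 07:00, so this QEMU controller reports NVMe 1.4.0. Decoded:

    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))   # -> NVMe 1.4.0
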
13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.587 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:14:38.588 13:56:03 nvme_fdp -- 
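oacs=0x12a, parsed above, is the Optional Admin Command Support mask. Under the base-spec bit assignments (bit 1 = Format NVM, bit 3 = Namespace Management, bit 5 = Directives, bit 8 = Doorbell Buffer Config), the set bits can be listed mechanically; the names in this table are spec assumptions, not something the trace states:

    oacs=0x12a
    declare -A oacs_bits=( [1]="Format NVM" [3]="Namespace Management" [5]="Directives" [8]="Doorbell Buffer Config" )
    for bit in "${!oacs_bits[@]}"; do
        (( oacs & (1 << bit) )) && echo "oacs bit $bit: ${oacs_bits[$bit]}"
    done
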
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:14:38.588 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:14:38.589 13:56:03 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:14:38.589 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:14:38.863 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
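The repetitive eval lines above are the nvme_get helper at work: it pipes `nvme id-ctrl`/`nvme id-ns` output through `IFS=: read -r reg val` and evals each pair into a global associative array such as nvme1n1. A minimal standalone sketch of the same pattern, assuming nvme-cli is installed and the device node exists; the array name and the direct assignment are illustrative simplifications, not the exact SPDK helper:

  #!/usr/bin/env bash
  # Collect "field : value" lines from nvme-cli into an associative array,
  # mirroring the IFS=: / read -r reg val loop visible in the trace.
  declare -A ns_info
  while IFS=: read -r reg val; do
      [[ -n $reg && -n $val ]] || continue        # skip blank or unparsable lines
      reg=${reg//[[:space:]]/}                    # keys arrive padded, e.g. "nsze   "
      val=${val#"${val%%[![:space:]]*}"}          # trim leading spaces from the value
      ns_info[$reg]=$val
  done < <(nvme id-ns /dev/nvme1n1)
  printf 'nsze=%s flbas=%s\n' "${ns_info[nsze]}" "${ns_info[flbas]}"

Direct assignment sidesteps the eval seen in the trace, which the helper needs only because its target array name (nvme1, nvme1n1, ...) is computed at runtime.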
00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
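Stepping back from the field dump: the outer loop driving all of this (functions.sh lines 47-57 in the trace) discovers controllers under /sys/class/nvme and then globs their nvmeXnY children, calling nvme_get for each. A self-contained sketch of that discovery walk, with the per-device nvme_get call replaced by a plain echo; resolving the device symlink is a common way to recover the PCI BDFs (0000:00:10.0, 0000:00:12.0) that the trace stores in its bdfs map, though not necessarily the exact mechanism the script uses:

  #!/usr/bin/env bash
  # Walk controllers and namespaces the same way the trace does:
  # /sys/class/nvme/nvmeX for controllers, nvmeXnY children for namespaces.
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      ctrl_dev=${ctrl##*/}                                 # e.g. nvme1
      bdf=$(basename "$(readlink -f "$ctrl/device")")      # PCI address, e.g. 0000:00:10.0
      for ns in "$ctrl/${ctrl_dev}n"*; do
          [[ -e $ns ]] || continue
          echo "ctrl=$ctrl_dev bdf=$bdf ns=${ns##*/}"      # trace runs nvme_get here
      done
  done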
00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.864 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 
13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
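The lbaf0-lbaf4 entries just dumped (lbaf5-lbaf7 follow below) are the namespace's supported LBA formats: ms is the per-block metadata size and lbads is log2 of the data block size. The flbas=0x7 reported earlier for nvme1n1 selects which format is in use. A short sketch decoding it, with constants copied from this trace and the bit layout per the NVMe base spec (FLBAS bits 3:0 hold the format index):

  #!/usr/bin/env bash
  # Decode the in-use LBA format for nvme1n1 from values in this trace.
  flbas=0x7      # from id-ns above
  lbads=12       # lbaf7 advertises lbads:12 (entry just below)
  ms=64          # lbaf7 advertises ms:64
  fmt=$(( flbas & 0xf ))     # FLBAS bits 3:0 -> format index 7
  bs=$(( 1 << lbads ))       # 2^12 = 4096-byte data blocks
  echo "in-use format: lbaf$fmt, ${bs}B data + ${ms}B metadata per block"

The resulting 4096+64 layout matches the "ms:64 lbads:12 rp:0 (in use)" entry that closes this namespace's dump.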
00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:14:38.865 13:56:03 nvme_fdp -- scripts/common.sh@15 -- # local i 00:14:38.865 13:56:03 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:14:38.865 13:56:03 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:38.865 13:56:03 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:14:38.865 13:56:03 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.865 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:14:38.866 13:56:03 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.866 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:14:38.867 13:56:03 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 
13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:38.867 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:14:38.868 13:56:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:14:38.868 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:14:38.869 13:56:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.869 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.870 13:56:03 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:14:38.870 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:14:38.871 13:56:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.871 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:14:38.872 13:56:03 nvme_fdp -- scripts/common.sh@15 -- # local i 00:14:38.872 13:56:03 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:14:38.872 13:56:03 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:38.872 13:56:03 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:14:38.872 13:56:03 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
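Aside: the wall of trace above and below is one parse loop. nvme/functions.sh splits each identify-controller line on ':' and evals the pair into a per-controller associative array (nvme3 here). A simplified sketch of that loop, assuming nvme-cli's id-ctrl output as input; the real helper also walks namespaces and handles more quoting edge cases:

    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg,,}; reg=${reg//[^a-z0-9]/}    # normalize the register name
        [[ -n $reg && -n $val ]] || continue
        eval "nvme3[$reg]=\"${val# }\""         # e.g. nvme3[mdts]="7", nvme3[ctratt]="0x88010"
    done < <(nvme id-ctrl /dev/nvme3)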
00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:14:38.872 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 
13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:14:38.873 13:56:03 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:14:38.873 13:56:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:39.131 13:56:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:14:39.132 13:56:03 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:14:39.132 13:56:03 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:14:39.132 13:56:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:14:39.132 13:56:03 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:14:39.132 13:56:03 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:39.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:39.955 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:39.955 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:39.955 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:40.214 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:40.214 13:56:04 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:40.214 13:56:04 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:40.214 13:56:04 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.214 13:56:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:40.214 ************************************ 00:14:40.214 START TEST nvme_flexible_data_placement 00:14:40.214 ************************************ 00:14:40.214 13:56:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:40.473 Initializing NVMe Controllers 00:14:40.473 Attaching to 0000:00:13.0 00:14:40.473 Controller supports FDP Attached to 0000:00:13.0 00:14:40.473 Namespace ID: 1 Endurance Group ID: 1 
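The controller-selection loop traced above reduces to one bit test: get_ctrls_with_feature keeps a controller only if CTRATT bit 19 (the Flexible Data Placement attribute) is set, which is why nvme3 (ctratt=0x88010) is echoed and the 0x8000 controllers are skipped. A standalone sketch of the same check, with the values from this run:

    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))                 # succeeds only when the FDP bit is set
    }
    ctrl_has_fdp "$((0x88010))" && echo nvme3  # bit 19 set -> selected
    ctrl_has_fdp "$((0x8000))" || echo no-fdp  # bit 19 clear -> skipped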
00:14:40.473 Initialization complete. 00:14:40.473 00:14:40.473 ================================== 00:14:40.473 == FDP tests for Namespace: #01 == 00:14:40.473 ================================== 00:14:40.473 00:14:40.473 Get Feature: FDP: 00:14:40.473 ================= 00:14:40.473 Enabled: Yes 00:14:40.473 FDP configuration Index: 0 00:14:40.473 00:14:40.473 FDP configurations log page 00:14:40.473 =========================== 00:14:40.473 Number of FDP configurations: 1 00:14:40.473 Version: 0 00:14:40.473 Size: 112 00:14:40.473 FDP Configuration Descriptor: 0 00:14:40.473 Descriptor Size: 96 00:14:40.473 Reclaim Group Identifier format: 2 00:14:40.473 FDP Volatile Write Cache: Not Present 00:14:40.473 FDP Configuration: Valid 00:14:40.473 Vendor Specific Size: 0 00:14:40.473 Number of Reclaim Groups: 2 00:14:40.473 Number of Reclaim Unit Handles: 8 00:14:40.473 Max Placement Identifiers: 128 00:14:40.473 Number of Namespaces Supported: 256 00:14:40.473 Reclaim Unit Nominal Size: 6000000 bytes 00:14:40.473 Estimated Reclaim Unit Time Limit: Not Reported 00:14:40.473 RUH Desc #000: RUH Type: Initially Isolated 00:14:40.473 RUH Desc #001: RUH Type: Initially Isolated 00:14:40.473 RUH Desc #002: RUH Type: Initially Isolated 00:14:40.473 RUH Desc #003: RUH Type: Initially Isolated 00:14:40.473 RUH Desc #004: RUH Type: Initially Isolated 00:14:40.473 RUH Desc #005: RUH Type: Initially Isolated 00:14:40.473 RUH Desc #006: RUH Type: Initially Isolated 00:14:40.473 RUH Desc #007: RUH Type: Initially Isolated 00:14:40.473 00:14:40.473 FDP reclaim unit handle usage log page 00:14:40.473 ====================================== 00:14:40.473 Number of Reclaim Unit Handles: 8 00:14:40.473 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:40.473 RUH Usage Desc #001: RUH Attributes: Unused 00:14:40.473 RUH Usage Desc #002: RUH Attributes: Unused 00:14:40.473 RUH Usage Desc #003: RUH Attributes: Unused 00:14:40.473 RUH Usage Desc #004: RUH Attributes: Unused 00:14:40.473 RUH Usage Desc #005: RUH Attributes: Unused 00:14:40.473 RUH Usage Desc #006: RUH Attributes: Unused 00:14:40.473 RUH Usage Desc #007: RUH Attributes: Unused 00:14:40.473 00:14:40.473 FDP statistics log page 00:14:40.473 ======================= 00:14:40.473 Host bytes with metadata written: 786812928 00:14:40.473 Media bytes with metadata written: 786886656 00:14:40.473 Media bytes erased: 0 00:14:40.473 00:14:40.473 FDP Reclaim unit handle status 00:14:40.473 ============================== 00:14:40.473 Number of RUHS descriptors: 2 00:14:40.473 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000011a3 00:14:40.473 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:14:40.473 00:14:40.473 FDP write on placement id: 0 success 00:14:40.473 00:14:40.473 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:14:40.473 00:14:40.473 IO mgmt send: RUH update for Placement ID: #0 Success 00:14:40.473 00:14:40.473 Get Feature: FDP Events for Placement handle: #0 00:14:40.473 ======================== 00:14:40.473 Number of FDP Events: 6 00:14:40.473 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:14:40.473 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:14:40.473 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:14:40.473 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:14:40.473 FDP Event: #4 Type: Media Reallocated Enabled: No 00:14:40.473 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
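A note on reading the RUHS descriptors above: RUAMW is the Reclaim Unit Available Media Writes, i.e. how many logical blocks can still be written into that reclaim unit before it is full. Decoding this run's two values:

    printf '%d\n' 0x11a3    # handle #0000: 4515 logical blocks remaining
    printf '%d\n' 0x6000    # handle #0001: 24576 logical blocks remaining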
00:14:40.473 00:14:40.473 FDP events log page 00:14:40.473 =================== 00:14:40.473 Number of FDP events: 1 00:14:40.473 FDP Event #0: 00:14:40.473 Event Type: RU Not Written to Capacity 00:14:40.473 Placement Identifier: Valid 00:14:40.473 NSID: Valid 00:14:40.473 Location: Valid 00:14:40.473 Placement Identifier: 0 00:14:40.473 Event Timestamp: 7 00:14:40.473 Namespace Identifier: 1 00:14:40.473 Reclaim Group Identifier: 0 00:14:40.473 Reclaim Unit Handle Identifier: 0 00:14:40.473 00:14:40.473 FDP test passed 00:14:40.473 00:14:40.473 real 0m0.278s 00:14:40.473 user 0m0.096s 00:14:40.473 sys 0m0.081s 00:14:40.473 13:56:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:40.473 13:56:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:14:40.473 ************************************ 00:14:40.473 END TEST nvme_flexible_data_placement 00:14:40.473 ************************************ 00:14:40.473 13:56:04 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:14:40.473 00:14:40.473 real 0m7.961s 00:14:40.473 user 0m1.278s 00:14:40.473 sys 0m1.609s 00:14:40.473 13:56:04 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:40.473 13:56:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:40.473 ************************************ 00:14:40.473 END TEST nvme_fdp 00:14:40.473 ************************************ 00:14:40.473 13:56:04 -- common/autotest_common.sh@1142 -- # return 0 00:14:40.473 13:56:04 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:14:40.473 13:56:04 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:40.473 13:56:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:40.473 13:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.473 13:56:04 -- common/autotest_common.sh@10 -- # set +x 00:14:40.473 ************************************ 00:14:40.473 START TEST nvme_rpc 00:14:40.473 ************************************ 00:14:40.473 13:56:04 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:40.473 * Looking for test storage... 
00:14:40.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:40.732 13:56:05 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.732 13:56:05 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:14:40.732 13:56:05 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:14:40.732 13:56:05 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72770 00:14:40.732 13:56:05 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:40.732 13:56:05 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:14:40.732 13:56:05 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72770 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 72770 ']' 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.732 13:56:05 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.732 [2024-07-15 13:56:05.209355] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:14:40.732 [2024-07-15 13:56:05.209560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72770 ] 00:14:40.991 [2024-07-15 13:56:05.385733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:41.249 [2024-07-15 13:56:05.617509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.249 [2024-07-15 13:56:05.617513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.817 13:56:06 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.817 13:56:06 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:14:41.817 13:56:06 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:14:42.384 Nvme0n1 00:14:42.384 13:56:06 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:14:42.384 13:56:06 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:14:42.642 request: 00:14:42.642 { 00:14:42.642 "bdev_name": "Nvme0n1", 00:14:42.642 "filename": "non_existing_file", 00:14:42.642 "method": "bdev_nvme_apply_firmware", 00:14:42.642 "req_id": 1 00:14:42.642 } 00:14:42.642 Got JSON-RPC error response 00:14:42.642 response: 00:14:42.642 { 00:14:42.642 "code": -32603, 00:14:42.642 "message": "open file failed." 00:14:42.642 } 00:14:42.642 13:56:07 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:14:42.642 13:56:07 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:14:42.642 13:56:07 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:42.900 13:56:07 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:42.900 13:56:07 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 72770 00:14:42.900 13:56:07 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 72770 ']' 00:14:42.900 13:56:07 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 72770 00:14:42.900 13:56:07 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:14:42.900 13:56:07 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.900 13:56:07 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72770 00:14:42.900 13:56:07 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:42.900 13:56:07 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:42.900 killing process with pid 72770 00:14:42.900 13:56:07 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72770' 00:14:42.900 13:56:07 nvme_rpc -- common/autotest_common.sh@967 -- # kill 72770 00:14:42.900 13:56:07 nvme_rpc -- common/autotest_common.sh@972 -- # wait 72770 00:14:45.432 00:14:45.432 real 0m4.494s 00:14:45.432 user 0m8.609s 00:14:45.432 sys 0m0.625s 00:14:45.432 13:56:09 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:45.432 ************************************ 00:14:45.432 END TEST nvme_rpc 00:14:45.432 13:56:09 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.432 ************************************ 00:14:45.432 13:56:09 -- common/autotest_common.sh@1142 -- # return 0 00:14:45.432 13:56:09 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
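The BDF that TEST nvme_rpc attached to was picked by get_first_nvme_bdf, traced at the start of that test: it collects every traddr that gen_nvme.sh reports and returns the first one. A condensed sketch of that helper, using this run's repo path:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1    # bail out if no NVMe controllers were found
    echo "${bdfs[0]}"                  # here: 0000:00:10.0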
00:14:45.432 13:56:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:45.432 13:56:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:45.432 13:56:09 -- common/autotest_common.sh@10 -- # set +x 00:14:45.432 ************************************ 00:14:45.432 START TEST nvme_rpc_timeouts 00:14:45.432 ************************************ 00:14:45.432 13:56:09 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:45.432 * Looking for test storage... 00:14:45.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:45.432 13:56:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:45.432 13:56:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_72846 00:14:45.432 13:56:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_72846 00:14:45.432 13:56:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=72870 00:14:45.432 13:56:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:45.432 13:56:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:14:45.432 13:56:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 72870 00:14:45.432 13:56:09 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 72870 ']' 00:14:45.432 13:56:09 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.432 13:56:09 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.432 13:56:09 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.432 13:56:09 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.432 13:56:09 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:45.432 [2024-07-15 13:56:09.690826] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:14:45.432 [2024-07-15 13:56:09.691040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72870 ] 00:14:45.432 [2024-07-15 13:56:09.873835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:45.690 [2024-07-15 13:56:10.122569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.690 [2024-07-15 13:56:10.122572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.623 13:56:10 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:46.623 Checking default timeout settings: 00:14:46.623 13:56:10 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:14:46.623 13:56:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:14:46.623 13:56:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:46.881 Making settings changes with rpc: 00:14:46.881 13:56:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:14:46.881 13:56:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:14:47.139 Check default vs. modified settings: 00:14:47.139 13:56:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:14:47.139 13:56:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_72846 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_72846 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:14:47.705 Setting action_on_timeout is changed as expected. 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
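The "modified" side of the comparison above comes from a single RPC, visible earlier in the trace; for reference, the same call stands alone as:

    # 12 s I/O timeout, 24 s admin timeout, abort the command when either fires:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options \
        --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort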
00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_72846 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_72846 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:14:47.705 Setting timeout_us is changed as expected. 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_72846 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_72846 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:14:47.705 Setting timeout_admin_us is changed as expected. 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
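Each of the three checks above runs the same extraction pipeline against the two save_config dumps. One iteration, sketched with this run's temp files and the timeout_us values (0 before, 12000000 after):

    setting=timeout_us
    before=$(grep "$setting" /tmp/settings_default_72846 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_72846 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" != "$after" ]; then
        echo "Setting $setting is changed as expected."
    fi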
00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_72846 /tmp/settings_modified_72846 00:14:47.705 13:56:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 72870 00:14:47.705 13:56:12 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 72870 ']' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 72870 00:14:47.705 13:56:12 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:14:47.705 13:56:12 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72870 00:14:47.705 13:56:12 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:47.705 killing process with pid 72870 00:14:47.705 13:56:12 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72870' 00:14:47.705 13:56:12 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 72870 00:14:47.705 13:56:12 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 72870 00:14:50.236 RPC TIMEOUT SETTING TEST PASSED. 00:14:50.236 13:56:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:14:50.236 ************************************ 00:14:50.236 END TEST nvme_rpc_timeouts 00:14:50.236 ************************************ 00:14:50.236 00:14:50.236 real 0m4.858s 00:14:50.236 user 0m9.348s 00:14:50.236 sys 0m0.631s 00:14:50.236 13:56:14 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:50.236 13:56:14 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:50.236 13:56:14 -- common/autotest_common.sh@1142 -- # return 0 00:14:50.236 13:56:14 -- spdk/autotest.sh@243 -- # uname -s 00:14:50.236 13:56:14 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:14:50.236 13:56:14 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:50.236 13:56:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:50.236 13:56:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.236 13:56:14 -- common/autotest_common.sh@10 -- # set +x 00:14:50.236 ************************************ 00:14:50.236 START TEST sw_hotplug 00:14:50.236 ************************************ 00:14:50.236 13:56:14 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:50.236 * Looking for test storage... 
00:14:50.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:50.236 13:56:14 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:50.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:50.495 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:50.495 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:50.495 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:50.495 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:50.495 13:56:14 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:14:50.495 13:56:14 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:14:50.495 13:56:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:14:50.495 13:56:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@230 -- # local class 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:14:50.495 13:56:14 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:14:50.495 13:56:14 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:50.495 13:56:14 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:14:50.495 13:56:14 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:14:50.495 13:56:14 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:50.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:51.011 Waiting for block devices as requested 00:14:51.011 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:51.011 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:51.011 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:51.269 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:56.527 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:56.527 13:56:20 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:14:56.527 13:56:20 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:56.785 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:14:56.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:56.785 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:14:57.042 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:14:57.299 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:57.299 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:57.299 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:14:57.299 13:56:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=73729 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:14:57.556 13:56:21 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:14:57.556 13:56:21 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:14:57.556 13:56:21 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:14:57.556 13:56:21 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:14:57.556 13:56:21 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:57.556 13:56:21 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:57.556 Initializing NVMe Controllers 00:14:57.556 Attaching to 0000:00:10.0 00:14:57.556 Attaching to 0000:00:11.0 00:14:57.556 Attached to 0000:00:10.0 00:14:57.556 Attached to 0000:00:11.0 00:14:57.556 Initialization complete. Starting I/O... 
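Reassembled from the scripts/common.sh xtrace above, the userspace-NVMe probe is a single pipeline: lspci -mm -n -D prints every device as a BDF followed by a quoted numeric class, grep keeps the prog-if 02 entries, awk matches class 01 / subclass 08 (mass storage / NVM Express) and prints the BDF, and tr strips the quotes:

    # nvme_in_userspace as traced above, collapsed onto one line.
    # Class 01 = mass storage, subclass 08 = NVM, prog-if 02 = NVMe.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'

All four controllers (0000:00:10.0 through 0000:00:13.0) come back, and nvme_count=2 then truncates the array, which is why only 10.0 and 11.0 take part in the hotplug loop while 12.0 and 13.0 are denied via PCI_ALLOWED.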
00:14:57.556 QEMU NVMe Ctrl (12340 ): 21 I/Os completed (+21) 00:14:57.556 QEMU NVMe Ctrl (12341 ): 24 I/Os completed (+24) 00:14:57.556 00:14:58.928 QEMU NVMe Ctrl (12340 ): 1231 I/Os completed (+1210) 00:14:58.928 QEMU NVMe Ctrl (12341 ): 2078 I/Os completed (+2054) 00:14:58.928 00:14:59.860 QEMU NVMe Ctrl (12340 ): 2419 I/Os completed (+1188) 00:14:59.860 QEMU NVMe Ctrl (12341 ): 3504 I/Os completed (+1426) 00:14:59.860 00:15:00.834 QEMU NVMe Ctrl (12340 ): 5541 I/Os completed (+3122) 00:15:00.834 QEMU NVMe Ctrl (12341 ): 7523 I/Os completed (+4019) 00:15:00.834 00:15:01.766 QEMU NVMe Ctrl (12340 ): 7712 I/Os completed (+2171) 00:15:01.766 QEMU NVMe Ctrl (12341 ): 10333 I/Os completed (+2810) 00:15:01.766 00:15:02.701 QEMU NVMe Ctrl (12340 ): 9242 I/Os completed (+1530) 00:15:02.701 QEMU NVMe Ctrl (12341 ): 12362 I/Os completed (+2029) 00:15:02.701 00:15:03.635 13:56:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:03.635 13:56:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:03.635 13:56:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:03.635 [2024-07-15 13:56:27.863620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:03.635 Controller removed: QEMU NVMe Ctrl (12340 ) 00:15:03.635 [2024-07-15 13:56:27.865571] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 [2024-07-15 13:56:27.865646] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 [2024-07-15 13:56:27.865681] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 [2024-07-15 13:56:27.865708] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:03.635 [2024-07-15 13:56:27.869386] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 [2024-07-15 13:56:27.869453] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 [2024-07-15 13:56:27.869478] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 [2024-07-15 13:56:27.869500] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:15:03.635 EAL: Scan for (pci) bus failed. 00:15:03.635 13:56:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:03.635 13:56:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:03.635 [2024-07-15 13:56:27.900371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:03.635 Controller removed: QEMU NVMe Ctrl (12341 ) 00:15:03.635 [2024-07-15 13:56:27.903006] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 [2024-07-15 13:56:27.903093] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 [2024-07-15 13:56:27.903150] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 [2024-07-15 13:56:27.903184] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.635 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:03.636 [2024-07-15 13:56:27.906918] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.636 [2024-07-15 13:56:27.906991] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.636 [2024-07-15 13:56:27.907032] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.636 [2024-07-15 13:56:27.907062] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.636 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:15:03.636 EAL: Scan for (pci) bus failed. 00:15:03.636 13:56:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:15:03.636 13:56:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:03.636 13:56:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:03.636 13:56:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:03.636 13:56:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:03.636 00:15:03.636 13:56:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:03.636 13:56:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:03.636 13:56:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:03.636 13:56:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:03.636 13:56:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:03.636 Attaching to 0000:00:10.0 00:15:03.636 Attached to 0000:00:10.0 00:15:03.893 13:56:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:03.893 13:56:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:03.893 13:56:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:03.893 Attaching to 0000:00:11.0 00:15:03.893 Attached to 0000:00:11.0 00:15:04.826 QEMU NVMe Ctrl (12340 ): 1669 I/Os completed (+1669) 00:15:04.826 QEMU NVMe Ctrl (12341 ): 2555 I/Os completed (+2555) 00:15:04.826 00:15:05.760 QEMU NVMe Ctrl (12340 ): 3887 I/Os completed (+2218) 00:15:05.760 QEMU NVMe Ctrl (12341 ): 4976 I/Os completed (+2421) 00:15:05.760 00:15:06.693 QEMU NVMe Ctrl (12340 ): 5403 I/Os completed (+1516) 00:15:06.693 QEMU NVMe Ctrl (12341 ): 6661 I/Os completed (+1685) 00:15:06.693 00:15:07.628 QEMU NVMe Ctrl (12340 ): 6994 I/Os completed (+1591) 00:15:07.628 QEMU NVMe Ctrl (12341 ): 8617 I/Os completed (+1956) 00:15:07.628 00:15:08.561 QEMU NVMe Ctrl (12340 ): 8473 I/Os completed (+1479) 00:15:08.561 QEMU NVMe Ctrl (12341 ): 10256 I/Os completed (+1639) 00:15:08.561 00:15:09.931 QEMU NVMe Ctrl (12340 ): 10114 I/Os completed (+1641) 00:15:09.931 QEMU NVMe Ctrl (12341 ): 12390 I/Os completed (+2134) 00:15:09.931 00:15:10.863 QEMU NVMe Ctrl (12340 ): 11747 I/Os completed (+1633) 00:15:10.863 QEMU NVMe Ctrl (12341 ): 14529 I/Os completed (+2139) 
00:15:10.863 00:15:11.795 QEMU NVMe Ctrl (12340 ): 13496 I/Os completed (+1749) 00:15:11.795 QEMU NVMe Ctrl (12341 ): 16873 I/Os completed (+2344) 00:15:11.795 00:15:12.728 QEMU NVMe Ctrl (12340 ): 15021 I/Os completed (+1525) 00:15:12.728 QEMU NVMe Ctrl (12341 ): 19352 I/Os completed (+2479) 00:15:12.728 00:15:13.659 QEMU NVMe Ctrl (12340 ): 16717 I/Os completed (+1696) 00:15:13.659 QEMU NVMe Ctrl (12341 ): 21299 I/Os completed (+1947) 00:15:13.659 00:15:14.591 QEMU NVMe Ctrl (12340 ): 18543 I/Os completed (+1826) 00:15:14.591 QEMU NVMe Ctrl (12341 ): 23491 I/Os completed (+2192) 00:15:14.591 00:15:15.965 QEMU NVMe Ctrl (12340 ): 20228 I/Os completed (+1685) 00:15:15.965 QEMU NVMe Ctrl (12341 ): 25739 I/Os completed (+2248) 00:15:15.965 00:15:15.965 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:15.965 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:15.965 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:15.965 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:15.965 [2024-07-15 13:56:40.284046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:15.965 Controller removed: QEMU NVMe Ctrl (12340 ) 00:15:15.965 [2024-07-15 13:56:40.287448] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.287555] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.287603] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.287644] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:15.965 [2024-07-15 13:56:40.292572] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.292673] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.292718] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.292762] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:15.965 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:15.965 [2024-07-15 13:56:40.324412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
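The rhythm visible above (a burst of per-second I/O counters, two removals, a re-attach, then a long pause) is the helper's outer loop. From the traced values (hotplug_events=3, hotplug_wait=6, and the "sleep 12" at sw_hotplug.sh line 66) the shape is roughly the following; the remove step is assumed to be the echo 1 seen at line 40:

    # Outer remove/attach loop, inferred from the traced line numbers.
    hotplug_events=3 hotplug_wait=6
    while (( hotplug_events-- )); do
        for dev in "${nvmes[@]}"; do                   # 0000:00:10.0, 0000:00:11.0
            echo 1 > /sys/bus/pci/devices/$dev/remove  # detach under load
        done
        # ...confirm both controllers dropped out, then re-bind them...
        sleep $((hotplug_wait * 2))                    # the "sleep 12" above
    done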
00:15:15.965 Controller removed: QEMU NVMe Ctrl (12341 ) 00:15:15.965 [2024-07-15 13:56:40.327489] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.327586] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.327637] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.327678] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:15.965 [2024-07-15 13:56:40.331937] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.332030] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.332079] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 [2024-07-15 13:56:40.332123] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.965 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:15:15.965 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:15.965 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:15.965 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:15.965 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:16.223 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:16.223 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:16.224 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:16.224 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:16.224 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:16.224 Attaching to 0000:00:10.0 00:15:16.224 Attached to 0000:00:10.0 00:15:16.224 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:16.224 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:16.224 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:16.224 Attaching to 0000:00:11.0 00:15:16.224 Attached to 0000:00:11.0 00:15:16.789 QEMU NVMe Ctrl (12340 ): 1351 I/Os completed (+1351) 00:15:16.789 QEMU NVMe Ctrl (12341 ): 1037 I/Os completed (+1037) 00:15:16.789 00:15:17.722 QEMU NVMe Ctrl (12340 ): 4089 I/Os completed (+2738) 00:15:17.722 QEMU NVMe Ctrl (12341 ): 5225 I/Os completed (+4188) 00:15:17.722 00:15:18.664 QEMU NVMe Ctrl (12340 ): 5797 I/Os completed (+1708) 00:15:18.664 QEMU NVMe Ctrl (12341 ): 7278 I/Os completed (+2053) 00:15:18.664 00:15:19.618 QEMU NVMe Ctrl (12340 ): 7492 I/Os completed (+1695) 00:15:19.619 QEMU NVMe Ctrl (12341 ): 9221 I/Os completed (+1943) 00:15:19.619 00:15:20.551 QEMU NVMe Ctrl (12340 ): 9122 I/Os completed (+1630) 00:15:20.551 QEMU NVMe Ctrl (12341 ): 11023 I/Os completed (+1802) 00:15:20.551 00:15:21.925 QEMU NVMe Ctrl (12340 ): 10804 I/Os completed (+1682) 00:15:21.925 QEMU NVMe Ctrl (12341 ): 13010 I/Os completed (+1987) 00:15:21.925 00:15:22.858 QEMU NVMe Ctrl (12340 ): 12445 I/Os completed (+1641) 00:15:22.858 QEMU NVMe Ctrl (12341 ): 14801 I/Os completed (+1791) 00:15:22.858 00:15:23.791 QEMU NVMe Ctrl (12340 ): 14069 I/Os completed (+1624) 00:15:23.791 QEMU NVMe Ctrl (12341 ): 16810 I/Os completed (+2009) 00:15:23.791 00:15:24.724 
QEMU NVMe Ctrl (12340 ): 16178 I/Os completed (+2109) 00:15:24.724 QEMU NVMe Ctrl (12341 ): 20173 I/Os completed (+3363) 00:15:24.724 00:15:25.657 QEMU NVMe Ctrl (12340 ): 18045 I/Os completed (+1867) 00:15:25.657 QEMU NVMe Ctrl (12341 ): 22364 I/Os completed (+2191) 00:15:25.657 00:15:26.589 QEMU NVMe Ctrl (12340 ): 19669 I/Os completed (+1624) 00:15:26.589 QEMU NVMe Ctrl (12341 ): 24225 I/Os completed (+1861) 00:15:26.589 00:15:27.964 QEMU NVMe Ctrl (12340 ): 21591 I/Os completed (+1922) 00:15:27.964 QEMU NVMe Ctrl (12341 ): 26354 I/Os completed (+2129) 00:15:27.964 00:15:28.222 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:28.222 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:28.222 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:28.222 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:28.222 [2024-07-15 13:56:52.632039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:28.222 Controller removed: QEMU NVMe Ctrl (12340 ) 00:15:28.222 [2024-07-15 13:56:52.635199] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.635298] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.635360] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.635399] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:28.222 [2024-07-15 13:56:52.639993] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.640080] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.640119] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.640155] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:28.222 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:28.222 [2024-07-15 13:56:52.663807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
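The helper_time figure printed a few lines below (43.10) comes from bash's own timekeeping, not date arithmetic: timing_cmd, traced at the start of this test, sets TIMEFORMAT=%2R so the time keyword emits nothing but real seconds to two decimals, then captures that report from stderr. The trick in isolation, with the command's own output discarded for brevity where the real helper routes it through to the log:

    # How timing_cmd measures the helper, reconstructed from the trace.
    timing_cmd() {
        local time=0 TIMEFORMAT=%2R
        # time reports on stderr; the braces plus 2>&1 let the command
        # substitution capture that report alone.
        time=$({ time "$@" > /dev/null 2>&1; } 2>&1)
        echo "$time"
    }
    helper_time=$(timing_cmd remove_attach_helper 3 6 false)   # "43.10"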
00:15:28.222 Controller removed: QEMU NVMe Ctrl (12341 ) 00:15:28.222 [2024-07-15 13:56:52.666865] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.666963] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.667009] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.667043] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:28.222 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:15:28.222 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:28.222 [2024-07-15 13:56:52.671113] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.671188] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.671231] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.222 [2024-07-15 13:56:52.671264] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.479 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:28.479 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:28.479 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:28.479 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:28.480 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:28.480 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:28.480 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:28.480 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:28.480 Attaching to 0000:00:10.0 00:15:28.480 Attached to 0000:00:10.0 00:15:28.480 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:28.480 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:28.480 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:28.480 Attaching to 0000:00:11.0 00:15:28.480 Attached to 0000:00:11.0 00:15:28.480 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:28.480 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:28.480 [2024-07-15 13:56:52.983336] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:15:40.716 13:57:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:40.716 13:57:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:40.716 13:57:04 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.10 00:15:40.716 13:57:04 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.10 00:15:40.716 13:57:04 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:15:40.716 13:57:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.10 00:15:40.716 13:57:04 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.10 2 00:15:40.716 remove_attach_helper took 43.10s to complete (handling 2 nvme drive(s)) 13:57:04 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:15:47.277 13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 73729 00:15:47.277 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (73729) - No such process 00:15:47.277 13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 73729 00:15:47.277 13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:15:47.277 13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:15:47.277 13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:15:47.277 13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=74271 00:15:47.277 13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.277 13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:15:47.277 13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 74271 00:15:47.277 13:57:10 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 74271 ']' 00:15:47.277 13:57:10 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.277 13:57:10 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.277 13:57:10 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.277 13:57:10 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.277 13:57:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:47.277 [2024-07-15 13:57:11.118641] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:47.277 [2024-07-15 13:57:11.118909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74271 ] 00:15:47.277 [2024-07-15 13:57:11.291017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.277 [2024-07-15 13:57:11.522017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.843 13:57:12 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.843 13:57:12 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:15:47.843 13:57:12 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:47.843 13:57:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.843 13:57:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:47.843 13:57:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.843 13:57:12 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:15:47.843 13:57:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:47.843 13:57:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:47.843 13:57:12 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:15:47.843 13:57:12 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:15:47.843 13:57:12 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:15:47.843 13:57:12 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:15:47.844 13:57:12 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:15:47.844 13:57:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:47.844 13:57:12 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:47.844 13:57:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:47.844 13:57:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:47.844 13:57:12 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:54.456 13:57:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.456 13:57:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:54.456 [2024-07-15 13:57:18.461374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:54.456 [2024-07-15 13:57:18.464413] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.456 [2024-07-15 13:57:18.464465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.456 [2024-07-15 13:57:18.464504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.456 [2024-07-15 13:57:18.464533] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.456 [2024-07-15 13:57:18.464554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.456 [2024-07-15 13:57:18.464569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.456 [2024-07-15 13:57:18.464587] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.456 [2024-07-15 13:57:18.464601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.456 [2024-07-15 13:57:18.464616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.456 [2024-07-15 13:57:18.464631] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.456 [2024-07-15 13:57:18.464648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.456 [2024-07-15 13:57:18.464667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.456 13:57:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.456 13:57:18 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:54.456 [2024-07-15 13:57:18.861386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:15:54.456 [2024-07-15 13:57:18.864582] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.456 [2024-07-15 13:57:18.864644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.456 [2024-07-15 13:57:18.864668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.456 [2024-07-15 13:57:18.864699] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.456 [2024-07-15 13:57:18.864715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.456 [2024-07-15 13:57:18.864732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.456 [2024-07-15 13:57:18.864747] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.456 [2024-07-15 13:57:18.864763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.456 [2024-07-15 13:57:18.864777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.456 [2024-07-15 13:57:18.864794] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.456 [2024-07-15 13:57:18.864808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.456 [2024-07-15 13:57:18.864824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:54.456 13:57:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:54.456 13:57:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.456 13:57:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:54.456 13:57:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.713 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:54.713 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:54.713 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:54.713 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:54.713 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:54.970 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:54.970 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:54.970 
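With use_bdev=true the helper no longer consults sysfs to decide when a controller is gone; it asks the running spdk_tgt. The bdev_bdfs function traced above is an RPC call piped through jq, shown here with SPDK's stock JSON-RPC client standing in for the rpc_cmd wrapper:

    # Which PCI controllers does the target still expose as bdevs?
    # Reconstructed from the bdev_bdfs trace above.
    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u

An empty result is the "(( 0 > 0 ))" branch in the trace: both controllers have disappeared from the target and the loop moves on to re-attaching them; a non-empty result triggers the "Still waiting for ... to be gone" message and another 0.5 s sleep.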
13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:54.970 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:54.970 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:54.970 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:54.970 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:54.970 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:07.164 13:57:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:07.164 13:57:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:07.164 13:57:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:07.164 [2024-07-15 13:57:31.461657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:16:07.164 [2024-07-15 13:57:31.465486] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.164 [2024-07-15 13:57:31.465535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.164 [2024-07-15 13:57:31.465561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.164 [2024-07-15 13:57:31.465617] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.164 [2024-07-15 13:57:31.465646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.164 [2024-07-15 13:57:31.465662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.164 [2024-07-15 13:57:31.465681] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.164 [2024-07-15 13:57:31.465696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.164 [2024-07-15 13:57:31.465712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.164 [2024-07-15 13:57:31.465727] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.164 [2024-07-15 13:57:31.465743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.164 [2024-07-15 13:57:31.465758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:07.164 13:57:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.164 13:57:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:07.164 13:57:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:07.164 13:57:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:07.422 [2024-07-15 13:57:31.861625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:16:07.422 [2024-07-15 13:57:31.864487] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.422 [2024-07-15 13:57:31.864545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.422 [2024-07-15 13:57:31.864568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.422 [2024-07-15 13:57:31.864600] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.422 [2024-07-15 13:57:31.864616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.422 [2024-07-15 13:57:31.864669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.422 [2024-07-15 13:57:31.864686] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.422 [2024-07-15 13:57:31.864703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.422 [2024-07-15 13:57:31.864717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.422 [2024-07-15 13:57:31.864736] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.422 [2024-07-15 13:57:31.864751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.422 [2024-07-15 13:57:31.864767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.680 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:07.680 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:07.680 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:07.680 13:57:32 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:16:07.680 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:07.680 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:07.680 13:57:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.680 13:57:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:07.680 13:57:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.680 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:07.680 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:07.680 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:07.680 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:07.680 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:07.938 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:07.938 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:07.938 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:07.938 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:07.938 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:07.938 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:07.938 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:07.938 13:57:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:20.134 13:57:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.134 13:57:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:20.134 13:57:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:20.134 13:57:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.134 13:57:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:20.134 [2024-07-15 13:57:44.561866] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:16:20.134 13:57:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.134 [2024-07-15 13:57:44.564835] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.134 [2024-07-15 13:57:44.564887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.134 [2024-07-15 13:57:44.564912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.134 [2024-07-15 13:57:44.564941] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.134 [2024-07-15 13:57:44.564959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.134 [2024-07-15 13:57:44.564974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.134 [2024-07-15 13:57:44.564995] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.134 [2024-07-15 13:57:44.565009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.134 [2024-07-15 13:57:44.565025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.134 [2024-07-15 13:57:44.565039] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.134 [2024-07-15 13:57:44.565055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.134 [2024-07-15 13:57:44.565069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:20.134 13:57:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:20.700 [2024-07-15 13:57:44.961856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:16:20.700 [2024-07-15 13:57:44.965289] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.700 [2024-07-15 13:57:44.965360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.700 [2024-07-15 13:57:44.965384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.700 [2024-07-15 13:57:44.965428] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.700 [2024-07-15 13:57:44.965447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.700 [2024-07-15 13:57:44.965468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.700 [2024-07-15 13:57:44.965485] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.700 [2024-07-15 13:57:44.965504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.700 [2024-07-15 13:57:44.965519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.700 [2024-07-15 13:57:44.965546] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.700 [2024-07-15 13:57:44.965561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.700 [2024-07-15 13:57:44.965580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.700 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:20.700 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:20.700 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:20.700 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:20.700 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:20.700 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:20.700 13:57:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.700 13:57:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:20.700 13:57:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.700 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:20.700 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:20.700 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:20.700 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:20.700 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:20.958 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:20.958 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:20.958 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:20.958 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:20.958 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:20.958 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:20.958 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:20.958 13:57:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.13 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.13 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.13 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.13 2 00:16:33.153 remove_attach_helper took 45.13s to complete (handling 2 nvme drive(s)) 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:16:33.153 13:57:57 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:33.153 13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:33.153 13:57:57 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:39.711 13:58:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.711 13:58:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:39.711 [2024-07-15 13:58:03.621769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:16:39.711 [2024-07-15 13:58:03.623769] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.711 [2024-07-15 13:58:03.623816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.711 [2024-07-15 13:58:03.623842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.711 [2024-07-15 13:58:03.623870] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.711 [2024-07-15 13:58:03.623889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.711 [2024-07-15 13:58:03.623910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.711 [2024-07-15 13:58:03.623929] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.711 [2024-07-15 13:58:03.623943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.711 [2024-07-15 13:58:03.623963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.711 [2024-07-15 13:58:03.623978] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.711 [2024-07-15 13:58:03.623994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.711 [2024-07-15 13:58:03.624008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.711 13:58:03 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:39.711 13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:39.711 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:39.711 13:58:04 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:39.711 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:39.711 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:39.711 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:39.711 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:39.711 13:58:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.711 13:58:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:39.711 13:58:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.711 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:39.711 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:39.969 [2024-07-15 13:58:04.321780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:16:39.969 [2024-07-15 13:58:04.323726] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.969 [2024-07-15 13:58:04.323785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.969 [2024-07-15 13:58:04.323808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.969 [2024-07-15 13:58:04.323837] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.969 [2024-07-15 13:58:04.323853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.969 [2024-07-15 13:58:04.323870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.969 [2024-07-15 13:58:04.323886] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.969 [2024-07-15 13:58:04.323905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.969 [2024-07-15 13:58:04.323920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.969 [2024-07-15 13:58:04.323938] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.969 [2024-07-15 13:58:04.323953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.969 [2024-07-15 13:58:04.323969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.227 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:40.227 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:40.227 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:40.227 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:40.227 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:40.227 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:40.227 13:58:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.227 13:58:04 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:16:40.227 13:58:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.227 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:40.227 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:40.485 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:40.485 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:40.485 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:40.485 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:40.485 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:40.485 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:40.485 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:40.485 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:40.742 13:58:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:40.742 13:58:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:40.742 13:58:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:52.995 13:58:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.995 13:58:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:52.995 13:58:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:52.995 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:52.995 13:58:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.995 13:58:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:52.995 [2024-07-15 13:58:17.221989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:16:52.996 [2024-07-15 13:58:17.224556] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.996 [2024-07-15 13:58:17.224713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.996 [2024-07-15 13:58:17.224865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.996 [2024-07-15 13:58:17.225018] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.996 [2024-07-15 13:58:17.225237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.996 [2024-07-15 13:58:17.225394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.996 [2024-07-15 13:58:17.225551] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.996 [2024-07-15 13:58:17.225812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.996 [2024-07-15 13:58:17.225957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.996 [2024-07-15 13:58:17.226094] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.996 [2024-07-15 13:58:17.226152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.996 [2024-07-15 13:58:17.226295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.996 13:58:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.996 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:52.996 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:53.254 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:53.254 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:53.254 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:53.254 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:53.254 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:53.254 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:53.254 13:58:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.254 13:58:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:53.254 13:58:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.513 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:53.513 13:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:53.513 [2024-07-15 13:58:17.822010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
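[annotation] The bdev_bdfs checks that recur throughout this hotplug trace list the PCI addresses of the NVMe controllers SPDK currently exposes as bdevs. The helper's body is never printed, but the xtrace lines above (sw_hotplug.sh@12-13: rpc_cmd bdev_get_bdevs, jq reading /dev/fd/63, sort -u) pin down its shape. A minimal reconstruction, assuming rpc_cmd is the usual SPDK test wrapper around scripts/rpc.py:

    # Reconstructed from the @12/@13 xtrace; /dev/fd/63 in the trace is the
    # process substitution feeding bdev_get_bdevs JSON into jq.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

The surrounding loop (sw_hotplug.sh@50-51) polls this list every 0.5 s until the detached controllers disappear, which is why the "Still waiting for %s to be gone" printf repeats with a shrinking set of BDFs.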
00:16:53.513 [2024-07-15 13:58:17.827415] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.513 [2024-07-15 13:58:17.827593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.513 [2024-07-15 13:58:17.827754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.513 [2024-07-15 13:58:17.827917] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.513 [2024-07-15 13:58:17.828116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.513 [2024-07-15 13:58:17.828199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.513 [2024-07-15 13:58:17.828365] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.513 [2024-07-15 13:58:17.828573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.513 [2024-07-15 13:58:17.828747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.513 [2024-07-15 13:58:17.828912] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.513 [2024-07-15 13:58:17.829065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.513 [2024-07-15 13:58:17.829217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.772 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:53.772 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:53.772 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:53.772 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:53.772 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:53.772 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:53.772 13:58:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.772 13:58:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:54.065 13:58:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.065 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:54.065 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:54.065 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:54.065 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:54.065 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:54.065 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:54.065 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:54.065 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:54.065 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:54.065 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:16:54.323 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:54.323 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:54.323 13:58:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:06.516 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:06.517 13:58:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.517 13:58:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:06.517 13:58:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:06.517 13:58:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.517 13:58:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:06.517 [2024-07-15 13:58:30.822217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
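[annotation] A note on the bare echo lines at sw_hotplug.sh@40 and @56-@62: bash xtrace does not print redirections, so only the echoed values are visible, never the sysfs files they are written to. The test is evidently hot-removing each controller, rescanning the bus, and rebinding the devices to uio_pci_generic; the exact targets are not recoverable from this log, but a plausible equivalent using standard sysfs attributes (the paths below are an assumption, not lifted from sw_hotplug.sh) would be:

    # Assumed sysfs targets -- xtrace hides the actual redirections.
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"       # @40: hot remove
    done
    echo 1 > /sys/bus/pci/rescan                          # @56: bring devices back
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # @59
        echo "$bdf" > /sys/bus/pci/drivers_probe          # @60/@61: (re)bind
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # @62: clear
    done

The "in failed state" and "aborting outstanding command" errors that follow each removal appear to be the expected side effect: the driver aborts the in-flight admin commands (the queued ASYNC EVENT REQUESTs) when a controller vanishes underneath it.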
00:17:06.517 [2024-07-15 13:58:30.825054] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.517 [2024-07-15 13:58:30.825105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.517 [2024-07-15 13:58:30.825131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.517 [2024-07-15 13:58:30.825159] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.517 [2024-07-15 13:58:30.825177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.517 [2024-07-15 13:58:30.825192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.517 [2024-07-15 13:58:30.825210] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.517 [2024-07-15 13:58:30.825224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.517 [2024-07-15 13:58:30.825245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.517 [2024-07-15 13:58:30.825260] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.517 [2024-07-15 13:58:30.825276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.517 [2024-07-15 13:58:30.825291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.517 13:58:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:06.517 13:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:07.083 [2024-07-15 13:58:31.322228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:17:07.083 [2024-07-15 13:58:31.324208] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.083 [2024-07-15 13:58:31.324282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:07.083 [2024-07-15 13:58:31.324305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.083 [2024-07-15 13:58:31.324350] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.083 [2024-07-15 13:58:31.324369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:07.083 [2024-07-15 13:58:31.324387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.083 [2024-07-15 13:58:31.324403] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.083 [2024-07-15 13:58:31.324426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:07.083 [2024-07-15 13:58:31.324441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.083 [2024-07-15 13:58:31.324457] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.083 [2024-07-15 13:58:31.324471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:07.083 [2024-07-15 13:58:31.324490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:07.083 13:58:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:07.083 13:58:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:07.083 13:58:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:07.083 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:07.341 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:07.341 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:07.341 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:07.341 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:17:07.342 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:07.342 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:07.342 13:58:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:19.563 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:19.563 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:19.563 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:19.563 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:19.563 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:19.563 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.563 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:19.563 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@715 -- # time=46.28 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@716 -- # echo 46.28 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:17:19.563 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.28 00:17:19.563 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.28 2 00:17:19.563 remove_attach_helper took 46.28s to complete (handling 2 nvme drive(s)) 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:17:19.563 13:58:43 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 74271 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 74271 ']' 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 74271 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74271 00:17:19.563 killing process with pid 74271 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74271' 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@967 -- # kill 74271 00:17:19.563 13:58:43 sw_hotplug -- common/autotest_common.sh@972 -- # wait 74271 00:17:21.469 13:58:45 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:22.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:22.315 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:22.315 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:22.315 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:22.571 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:22.571 00:17:22.571 real 2m32.546s 00:17:22.571 user 1m52.908s 00:17:22.571 sys 0m19.790s 00:17:22.571 13:58:46 sw_hotplug -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.571 ************************************ 00:17:22.571 13:58:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:22.571 END TEST sw_hotplug 00:17:22.571 ************************************ 00:17:22.571 13:58:46 -- common/autotest_common.sh@1142 -- # return 0 00:17:22.571 13:58:46 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:17:22.571 13:58:46 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:17:22.571 13:58:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:22.571 13:58:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.571 13:58:46 -- common/autotest_common.sh@10 -- # set +x 00:17:22.571 ************************************ 00:17:22.571 START TEST nvme_xnvme 00:17:22.571 ************************************ 00:17:22.571 13:58:46 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:17:22.571 * Looking for test storage... 00:17:22.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:17:22.571 13:58:47 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.571 13:58:47 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.571 13:58:47 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.571 13:58:47 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.571 13:58:47 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.571 13:58:47 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.571 13:58:47 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.571 13:58:47 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:17:22.571 13:58:47 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.571 13:58:47 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 
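[annotation] The xnvme_to_malloc_dd_copy test starting below copies a 1 GiB malloc bdev to an xnvme bdev backed by the null_blk module (modprobe null_blk gb=1) and back, once per io_mechanism (libaio, then io_uring). spdk_dd receives its bdev layout as JSON on /dev/fd/62 via process substitution; a standalone equivalent, writing the same config (it is printed verbatim further down) to a file instead, might look like:

    # Same bdev config the test generates; paths as in this log.
    cat > xnvme_copy.json <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"block_size":512,"num_blocks":2097152,"name":"malloc0"},
       "method":"bdev_malloc_create"},
      {"params":{"io_mechanism":"libaio","filename":"/dev/nullb0","name":"null0"},
       "method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=malloc0 --ob=null0 --json xnvme_copy.json

512-byte blocks x 2097152 blocks = 1 GiB, matching the 1024 MB totals in the "Copying:" progress lines.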
00:17:22.571 13:58:47 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:22.571 13:58:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.571 13:58:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.571 ************************************ 00:17:22.571 START TEST xnvme_to_malloc_dd_copy 00:17:22.571 ************************************ 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:17:22.571 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:17:22.572 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:17:22.572 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:17:22.572 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:17:22.572 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:17:22.572 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:17:22.572 13:58:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:22.828 { 00:17:22.828 "subsystems": [ 00:17:22.828 { 00:17:22.828 "subsystem": "bdev", 00:17:22.828 "config": [ 00:17:22.828 { 00:17:22.828 "params": { 00:17:22.828 "block_size": 512, 00:17:22.828 "num_blocks": 2097152, 00:17:22.828 "name": "malloc0" 00:17:22.828 }, 00:17:22.828 "method": 
"bdev_malloc_create" 00:17:22.828 }, 00:17:22.828 { 00:17:22.828 "params": { 00:17:22.828 "io_mechanism": "libaio", 00:17:22.828 "filename": "/dev/nullb0", 00:17:22.828 "name": "null0" 00:17:22.828 }, 00:17:22.828 "method": "bdev_xnvme_create" 00:17:22.828 }, 00:17:22.828 { 00:17:22.828 "method": "bdev_wait_for_examine" 00:17:22.828 } 00:17:22.828 ] 00:17:22.828 } 00:17:22.828 ] 00:17:22.828 } 00:17:22.828 [2024-07-15 13:58:47.185218] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:17:22.828 [2024-07-15 13:58:47.185410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75628 ] 00:17:22.828 [2024-07-15 13:58:47.358747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.085 [2024-07-15 13:58:47.546140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.853  Copying: 168/1024 [MB] (168 MBps) Copying: 338/1024 [MB] (169 MBps) Copying: 507/1024 [MB] (168 MBps) Copying: 671/1024 [MB] (164 MBps) Copying: 839/1024 [MB] (167 MBps) Copying: 1010/1024 [MB] (171 MBps) Copying: 1024/1024 [MB] (average 168 MBps) 00:17:33.853 00:17:33.853 13:58:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:17:33.853 13:58:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:17:33.853 13:58:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:17:33.853 13:58:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:34.111 { 00:17:34.111 "subsystems": [ 00:17:34.111 { 00:17:34.111 "subsystem": "bdev", 00:17:34.111 "config": [ 00:17:34.111 { 00:17:34.111 "params": { 00:17:34.111 "block_size": 512, 00:17:34.111 "num_blocks": 2097152, 00:17:34.111 "name": "malloc0" 00:17:34.111 }, 00:17:34.111 "method": "bdev_malloc_create" 00:17:34.111 }, 00:17:34.111 { 00:17:34.111 "params": { 00:17:34.111 "io_mechanism": "libaio", 00:17:34.111 "filename": "/dev/nullb0", 00:17:34.111 "name": "null0" 00:17:34.111 }, 00:17:34.111 "method": "bdev_xnvme_create" 00:17:34.111 }, 00:17:34.111 { 00:17:34.111 "method": "bdev_wait_for_examine" 00:17:34.111 } 00:17:34.111 ] 00:17:34.111 } 00:17:34.111 ] 00:17:34.111 } 00:17:34.111 [2024-07-15 13:58:58.447316] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:17:34.111 [2024-07-15 13:58:58.447492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75754 ] 00:17:34.111 [2024-07-15 13:58:58.619950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.369 [2024-07-15 13:58:58.846888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.852  Copying: 175/1024 [MB] (175 MBps) Copying: 352/1024 [MB] (176 MBps) Copying: 529/1024 [MB] (177 MBps) Copying: 707/1024 [MB] (177 MBps) Copying: 884/1024 [MB] (177 MBps) Copying: 1024/1024 [MB] (average 176 MBps) 00:17:44.853 00:17:44.853 13:59:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:17:44.853 13:59:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:44.853 13:59:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:17:44.853 13:59:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:17:44.853 13:59:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:17:44.853 13:59:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:45.112 { 00:17:45.112 "subsystems": [ 00:17:45.112 { 00:17:45.112 "subsystem": "bdev", 00:17:45.112 "config": [ 00:17:45.112 { 00:17:45.112 "params": { 00:17:45.112 "block_size": 512, 00:17:45.112 "num_blocks": 2097152, 00:17:45.112 "name": "malloc0" 00:17:45.112 }, 00:17:45.112 "method": "bdev_malloc_create" 00:17:45.112 }, 00:17:45.112 { 00:17:45.112 "params": { 00:17:45.112 "io_mechanism": "io_uring", 00:17:45.112 "filename": "/dev/nullb0", 00:17:45.112 "name": "null0" 00:17:45.112 }, 00:17:45.112 "method": "bdev_xnvme_create" 00:17:45.112 }, 00:17:45.112 { 00:17:45.112 "method": "bdev_wait_for_examine" 00:17:45.112 } 00:17:45.112 ] 00:17:45.112 } 00:17:45.112 ] 00:17:45.112 } 00:17:45.112 [2024-07-15 13:59:09.510893] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:17:45.112 [2024-07-15 13:59:09.511091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75875 ] 00:17:45.370 [2024-07-15 13:59:09.680381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.370 [2024-07-15 13:59:09.894134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.758  Copying: 175/1024 [MB] (175 MBps) Copying: 351/1024 [MB] (176 MBps) Copying: 527/1024 [MB] (175 MBps) Copying: 702/1024 [MB] (175 MBps) Copying: 878/1024 [MB] (175 MBps) Copying: 1024/1024 [MB] (average 175 MBps) 00:17:56.758 00:17:56.758 13:59:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:17:56.758 13:59:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:17:56.758 13:59:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:17:56.758 13:59:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:56.758 { 00:17:56.758 "subsystems": [ 00:17:56.758 { 00:17:56.758 "subsystem": "bdev", 00:17:56.758 "config": [ 00:17:56.758 { 00:17:56.758 "params": { 00:17:56.758 "block_size": 512, 00:17:56.758 "num_blocks": 2097152, 00:17:56.758 "name": "malloc0" 00:17:56.758 }, 00:17:56.758 "method": "bdev_malloc_create" 00:17:56.758 }, 00:17:56.758 { 00:17:56.759 "params": { 00:17:56.759 "io_mechanism": "io_uring", 00:17:56.759 "filename": "/dev/nullb0", 00:17:56.759 "name": "null0" 00:17:56.759 }, 00:17:56.759 "method": "bdev_xnvme_create" 00:17:56.759 }, 00:17:56.759 { 00:17:56.759 "method": "bdev_wait_for_examine" 00:17:56.759 } 00:17:56.759 ] 00:17:56.759 } 00:17:56.759 ] 00:17:56.759 } 00:17:56.759 [2024-07-15 13:59:20.587544] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:17:56.759 [2024-07-15 13:59:20.587721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76004 ] 00:17:56.759 [2024-07-15 13:59:20.759601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.759 [2024-07-15 13:59:20.950652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.695  Copying: 187/1024 [MB] (187 MBps) Copying: 374/1024 [MB] (187 MBps) Copying: 562/1024 [MB] (187 MBps) Copying: 749/1024 [MB] (187 MBps) Copying: 934/1024 [MB] (185 MBps) Copying: 1024/1024 [MB] (average 186 MBps) 00:18:06.695 00:18:06.695 13:59:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:18:06.695 13:59:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:18:06.695 00:18:06.695 real 0m44.131s 00:18:06.695 user 0m38.827s 00:18:06.695 sys 0m4.729s 00:18:06.695 13:59:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:06.695 ************************************ 00:18:06.695 END TEST xnvme_to_malloc_dd_copy 00:18:06.695 ************************************ 00:18:06.695 13:59:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:06.695 13:59:31 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:18:06.696 13:59:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:06.696 13:59:31 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:06.696 13:59:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:06.696 13:59:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:06.953 ************************************ 00:18:06.953 START TEST xnvme_bdevperf 00:18:06.953 ************************************ 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:18:06.953 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:18:06.954 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:18:06.954 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # 
for io in "${xnvme_io[@]}" 00:18:06.954 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:18:06.954 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:18:06.954 13:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:18:06.954 13:59:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:06.954 13:59:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:06.954 { 00:18:06.954 "subsystems": [ 00:18:06.954 { 00:18:06.954 "subsystem": "bdev", 00:18:06.954 "config": [ 00:18:06.954 { 00:18:06.954 "params": { 00:18:06.954 "io_mechanism": "libaio", 00:18:06.954 "filename": "/dev/nullb0", 00:18:06.954 "name": "null0" 00:18:06.954 }, 00:18:06.954 "method": "bdev_xnvme_create" 00:18:06.954 }, 00:18:06.954 { 00:18:06.954 "method": "bdev_wait_for_examine" 00:18:06.954 } 00:18:06.954 ] 00:18:06.954 } 00:18:06.954 ] 00:18:06.954 } 00:18:06.954 [2024-07-15 13:59:31.370581] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:18:06.954 [2024-07-15 13:59:31.370774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76142 ] 00:18:07.211 [2024-07-15 13:59:31.543345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.211 [2024-07-15 13:59:31.731970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.777 Running I/O for 5 seconds... 00:18:13.037 00:18:13.037 Latency(us) 00:18:13.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.037 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:13.037 null0 : 5.00 111813.38 436.77 0.00 0.00 568.72 182.46 1444.77 00:18:13.037 =================================================================================================================== 00:18:13.037 Total : 111813.38 436.77 0.00 0.00 568.72 182.46 1444.77 00:18:13.971 13:59:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:18:13.971 13:59:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:18:13.971 13:59:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:18:13.971 13:59:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:18:13.971 13:59:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:13.971 13:59:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:13.971 { 00:18:13.971 "subsystems": [ 00:18:13.971 { 00:18:13.971 "subsystem": "bdev", 00:18:13.971 "config": [ 00:18:13.971 { 00:18:13.971 "params": { 00:18:13.971 "io_mechanism": "io_uring", 00:18:13.971 "filename": "/dev/nullb0", 00:18:13.971 "name": "null0" 00:18:13.971 }, 00:18:13.971 "method": "bdev_xnvme_create" 00:18:13.971 }, 00:18:13.971 { 00:18:13.971 "method": "bdev_wait_for_examine" 00:18:13.971 } 00:18:13.971 ] 00:18:13.971 } 00:18:13.971 ] 00:18:13.971 } 00:18:13.971 [2024-07-15 13:59:38.281981] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:18:13.971 [2024-07-15 13:59:38.282146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76224 ] 00:18:13.971 [2024-07-15 13:59:38.448858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.229 [2024-07-15 13:59:38.684006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.487 Running I/O for 5 seconds... 00:18:19.749 00:18:19.749 Latency(us) 00:18:19.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.749 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:19.749 null0 : 5.00 149262.49 583.06 0.00 0.00 425.41 260.65 1355.40 00:18:19.749 =================================================================================================================== 00:18:19.749 Total : 149262.49 583.06 0.00 0.00 425.41 260.65 1355.40 00:18:20.680 13:59:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:18:20.680 13:59:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:18:20.680 00:18:20.680 real 0m13.876s 00:18:20.680 user 0m10.917s 00:18:20.680 sys 0m2.738s 00:18:20.680 13:59:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:20.680 13:59:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:20.680 ************************************ 00:18:20.681 END TEST xnvme_bdevperf 00:18:20.681 ************************************ 00:18:20.681 13:59:45 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:18:20.681 00:18:20.681 real 0m58.189s 00:18:20.681 user 0m49.804s 00:18:20.681 sys 0m7.584s 00:18:20.681 13:59:45 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:20.681 13:59:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:20.681 ************************************ 00:18:20.681 END TEST nvme_xnvme 00:18:20.681 ************************************ 00:18:20.681 13:59:45 -- common/autotest_common.sh@1142 -- # return 0 00:18:20.681 13:59:45 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:20.681 13:59:45 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:20.681 13:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.681 13:59:45 -- common/autotest_common.sh@10 -- # set +x 00:18:20.681 ************************************ 00:18:20.681 START TEST blockdev_xnvme 00:18:20.681 ************************************ 00:18:20.681 13:59:45 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:20.939 * Looking for test storage... 
00:18:20.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:20.939 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:20.939 13:59:45 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:20.939 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:20.939 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:20.939 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=76366 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:20.940 13:59:45 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 76366 00:18:20.940 13:59:45 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 76366 ']' 00:18:20.940 13:59:45 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.940 13:59:45 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.940 13:59:45 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.940 13:59:45 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.940 13:59:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:20.940 [2024-07-15 13:59:45.416605] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
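[annotation] Before creating xnvme bdevs, the setup trace that follows walks every /dev/nvme*n* device and skips zoned namespaces. The is_block_zoned helper is visible only through its xtrace (autotest_common.sh@1662-1665: a sysfs existence test plus a "[[ none != none ]]" comparison), but those lines pin it down; a reconstruction consistent with that trace:

    # Reconstructed from the @1662-@1665 xtrace below: a namespace counts as
    # zoned when /sys/block/<dev>/queue/zoned exists and reads something
    # other than "none".
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(</sys/block/$device/queue/zoned) != none ]]
    }

Every namespace here reports "none", so all six (nvme0n1 through nvme3n1) end up in the bdev_xnvme_create list with io_uring as the io_mechanism.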
00:18:20.940 [2024-07-15 13:59:45.417344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76366 ] 00:18:21.198 [2024-07-15 13:59:45.580977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.456 [2024-07-15 13:59:45.770829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.023 13:59:46 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.023 13:59:46 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:18:22.023 13:59:46 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:18:22.023 13:59:46 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:18:22.023 13:59:46 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:18:22.023 13:59:46 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:18:22.023 13:59:46 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:22.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:22.538 Waiting for block devices as requested 00:18:22.538 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:22.795 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:22.795 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:22.795 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:28.062 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:28.062 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:18:28.062 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:18:28.062 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:18:28.062 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:18:28.062 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:28.062 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:18:28.062 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:28.062 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:28.062 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:28.062 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:28.062 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:28.063 13:59:52 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:28.063 13:59:52 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:18:28.063 nvme0n1 00:18:28.063 nvme1n1 00:18:28.063 nvme2n1 00:18:28.063 nvme2n2 00:18:28.063 nvme2n3 00:18:28.063 nvme3n1 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.063 
13:59:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:18:28.063 13:59:52 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:18:28.063 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d519e6df-ed05-42b3-9ce6-29ce5823f8d4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d519e6df-ed05-42b3-9ce6-29ce5823f8d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "060d08e0-9784-4ad0-a4dc-44d63ef97a37"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "060d08e0-9784-4ad0-a4dc-44d63ef97a37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "11a3092a-23a6-4eee-be96-5644a9547475"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "11a3092a-23a6-4eee-be96-5644a9547475",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "baecb7bf-3910-4371-9926-ac5bf2b8d231"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "baecb7bf-3910-4371-9926-ac5bf2b8d231",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a196f7f8-f25e-4217-854f-04bcd3a4683b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a196f7f8-f25e-4217-854f-04bcd3a4683b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2edc6e16-fb75-4771-a8d5-808f187fff7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2edc6e16-fb75-4771-a8d5-808f187fff7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:28.322 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:18:28.322 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:18:28.322 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:18:28.322 13:59:52 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 76366 00:18:28.322 13:59:52 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 76366 ']' 00:18:28.322 13:59:52 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 76366 00:18:28.322 13:59:52 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:18:28.322 13:59:52 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.322 13:59:52 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76366 00:18:28.322 13:59:52 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:28.322 13:59:52 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:28.322 13:59:52 blockdev_xnvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76366' 00:18:28.322 
killing process with pid 76366 00:18:28.322 13:59:52 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 76366 00:18:28.322 13:59:52 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 76366 00:18:30.220 13:59:54 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:30.220 13:59:54 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:30.220 13:59:54 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:30.220 13:59:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.220 13:59:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:30.220 ************************************ 00:18:30.220 START TEST bdev_hello_world 00:18:30.220 ************************************ 00:18:30.220 13:59:54 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:30.476 [2024-07-15 13:59:54.854612] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:18:30.476 [2024-07-15 13:59:54.854772] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76731 ] 00:18:30.733 [2024-07-15 13:59:55.026523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.733 [2024-07-15 13:59:55.253503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.298 [2024-07-15 13:59:55.639994] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:31.298 [2024-07-15 13:59:55.640060] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:18:31.298 [2024-07-15 13:59:55.640088] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:31.298 [2024-07-15 13:59:55.642341] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:31.298 [2024-07-15 13:59:55.642745] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:31.298 [2024-07-15 13:59:55.642785] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:31.298 [2024-07-15 13:59:55.643055] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
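The stage above registered every non-zoned namespace as an xNVMe bdev before the hello_bdev example opened nvme0n1. A minimal standalone sketch of that setup, assuming the repo layout from this job and the default spdk_tgt RPC socket (the spdk_tgt binary path is an assumption; the RPC names and arguments are taken from the log):

  build/bin/spdk_tgt &                           # assumed path for the target seen as spdk_pid76366 above
  sudo scripts/setup.sh reset                    # hand the test controllers back to the kernel nvme driver
  for nvme in /dev/nvme*n*; do
      zoned=$(cat "/sys/block/${nvme##*/}/queue/zoned" 2>/dev/null || echo none)
      [[ "$zoned" == none ]] || continue         # the test skips zoned namespaces
      scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring
  done
  scripts/rpc.py bdev_wait_for_examine           # settle examination before the config is dumped and saved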
00:18:31.298 00:18:31.298 [2024-07-15 13:59:55.643099] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:32.672 00:18:32.672 real 0m2.038s 00:18:32.672 user 0m1.699s 00:18:32.672 sys 0m0.222s 00:18:32.672 13:59:56 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:32.672 13:59:56 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:32.672 ************************************ 00:18:32.672 END TEST bdev_hello_world 00:18:32.672 ************************************ 00:18:32.672 13:59:56 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:18:32.672 13:59:56 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:18:32.672 13:59:56 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:32.672 13:59:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.672 13:59:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:32.672 ************************************ 00:18:32.672 START TEST bdev_bounds 00:18:32.672 ************************************ 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=76772 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 76772' 00:18:32.672 Process bdevio pid: 76772 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 76772 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 76772 ']' 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.672 13:59:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:32.672 [2024-07-15 13:59:56.975283] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
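bdevio here is started with -w, so it brings up its reactors and then waits until an RPC triggers the suites; tests.py supplies that trigger. A sketch of the two-step invocation, with flags copied from the command lines in this stage and the working directory assumed to be the SPDK repo root:

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &   # --json replays the bdev config saved earlier
  test/bdev/bdevio/tests.py perform_tests                           # fires the RPC that runs every CUnit suite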
00:18:32.672 [2024-07-15 13:59:56.975485] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76772 ] 00:18:32.672 [2024-07-15 13:59:57.149126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:32.930 [2024-07-15 13:59:57.341686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.930 [2024-07-15 13:59:57.341755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.930 [2024-07-15 13:59:57.341755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.496 13:59:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.496 13:59:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:18:33.496 13:59:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:33.753 I/O targets: 00:18:33.753 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:33.753 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:18:33.753 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:33.753 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:33.753 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:33.753 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:33.753 00:18:33.753 00:18:33.753 CUnit - A unit testing framework for C - Version 2.1-3 00:18:33.753 http://cunit.sourceforge.net/ 00:18:33.753 00:18:33.753 00:18:33.753 Suite: bdevio tests on: nvme3n1 00:18:33.753 Test: blockdev write read block ...passed 00:18:33.753 Test: blockdev write zeroes read block ...passed 00:18:33.753 Test: blockdev write zeroes read no split ...passed 00:18:33.753 Test: blockdev write zeroes read split ...passed 00:18:33.753 Test: blockdev write zeroes read split partial ...passed 00:18:33.753 Test: blockdev reset ...passed 00:18:33.753 Test: blockdev write read 8 blocks ...passed 00:18:33.753 Test: blockdev write read size > 128k ...passed 00:18:33.753 Test: blockdev write read invalid size ...passed 00:18:33.753 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:33.753 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:33.753 Test: blockdev write read max offset ...passed 00:18:33.753 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:33.753 Test: blockdev writev readv 8 blocks ...passed 00:18:33.753 Test: blockdev writev readv 30 x 1block ...passed 00:18:33.753 Test: blockdev writev readv block ...passed 00:18:33.753 Test: blockdev writev readv size > 128k ...passed 00:18:33.753 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:33.753 Test: blockdev comparev and writev ...passed 00:18:33.753 Test: blockdev nvme passthru rw ...passed 00:18:33.754 Test: blockdev nvme passthru vendor specific ...passed 00:18:33.754 Test: blockdev nvme admin passthru ...passed 00:18:33.754 Test: blockdev copy ...passed 00:18:33.754 Suite: bdevio tests on: nvme2n3 00:18:33.754 Test: blockdev write read block ...passed 00:18:33.754 Test: blockdev write zeroes read block ...passed 00:18:33.754 Test: blockdev write zeroes read no split ...passed 00:18:33.754 Test: blockdev write zeroes read split ...passed 00:18:33.754 Test: blockdev write zeroes read split partial ...passed 00:18:33.754 Test: blockdev reset ...passed 
00:18:33.754 Test: blockdev write read 8 blocks ...passed 00:18:33.754 Test: blockdev write read size > 128k ...passed 00:18:33.754 Test: blockdev write read invalid size ...passed 00:18:33.754 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:33.754 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:33.754 Test: blockdev write read max offset ...passed 00:18:33.754 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:33.754 Test: blockdev writev readv 8 blocks ...passed 00:18:33.754 Test: blockdev writev readv 30 x 1block ...passed 00:18:33.754 Test: blockdev writev readv block ...passed 00:18:33.754 Test: blockdev writev readv size > 128k ...passed 00:18:33.754 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:33.754 Test: blockdev comparev and writev ...passed 00:18:33.754 Test: blockdev nvme passthru rw ...passed 00:18:33.754 Test: blockdev nvme passthru vendor specific ...passed 00:18:33.754 Test: blockdev nvme admin passthru ...passed 00:18:33.754 Test: blockdev copy ...passed 00:18:33.754 Suite: bdevio tests on: nvme2n2 00:18:33.754 Test: blockdev write read block ...passed 00:18:33.754 Test: blockdev write zeroes read block ...passed 00:18:33.754 Test: blockdev write zeroes read no split ...passed 00:18:33.754 Test: blockdev write zeroes read split ...passed 00:18:34.013 Test: blockdev write zeroes read split partial ...passed 00:18:34.013 Test: blockdev reset ...passed 00:18:34.013 Test: blockdev write read 8 blocks ...passed 00:18:34.013 Test: blockdev write read size > 128k ...passed 00:18:34.013 Test: blockdev write read invalid size ...passed 00:18:34.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:34.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:34.013 Test: blockdev write read max offset ...passed 00:18:34.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:34.013 Test: blockdev writev readv 8 blocks ...passed 00:18:34.013 Test: blockdev writev readv 30 x 1block ...passed 00:18:34.013 Test: blockdev writev readv block ...passed 00:18:34.013 Test: blockdev writev readv size > 128k ...passed 00:18:34.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:34.013 Test: blockdev comparev and writev ...passed 00:18:34.013 Test: blockdev nvme passthru rw ...passed 00:18:34.013 Test: blockdev nvme passthru vendor specific ...passed 00:18:34.013 Test: blockdev nvme admin passthru ...passed 00:18:34.013 Test: blockdev copy ...passed 00:18:34.013 Suite: bdevio tests on: nvme2n1 00:18:34.013 Test: blockdev write read block ...passed 00:18:34.013 Test: blockdev write zeroes read block ...passed 00:18:34.013 Test: blockdev write zeroes read no split ...passed 00:18:34.013 Test: blockdev write zeroes read split ...passed 00:18:34.013 Test: blockdev write zeroes read split partial ...passed 00:18:34.013 Test: blockdev reset ...passed 00:18:34.013 Test: blockdev write read 8 blocks ...passed 00:18:34.013 Test: blockdev write read size > 128k ...passed 00:18:34.013 Test: blockdev write read invalid size ...passed 00:18:34.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:34.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:34.013 Test: blockdev write read max offset ...passed 00:18:34.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:34.013 Test: blockdev writev readv 8 blocks 
...passed 00:18:34.013 Test: blockdev writev readv 30 x 1block ...passed 00:18:34.013 Test: blockdev writev readv block ...passed 00:18:34.013 Test: blockdev writev readv size > 128k ...passed 00:18:34.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:34.013 Test: blockdev comparev and writev ...passed 00:18:34.013 Test: blockdev nvme passthru rw ...passed 00:18:34.013 Test: blockdev nvme passthru vendor specific ...passed 00:18:34.013 Test: blockdev nvme admin passthru ...passed 00:18:34.013 Test: blockdev copy ...passed 00:18:34.013 Suite: bdevio tests on: nvme1n1 00:18:34.013 Test: blockdev write read block ...passed 00:18:34.013 Test: blockdev write zeroes read block ...passed 00:18:34.013 Test: blockdev write zeroes read no split ...passed 00:18:34.013 Test: blockdev write zeroes read split ...passed 00:18:34.013 Test: blockdev write zeroes read split partial ...passed 00:18:34.013 Test: blockdev reset ...passed 00:18:34.013 Test: blockdev write read 8 blocks ...passed 00:18:34.013 Test: blockdev write read size > 128k ...passed 00:18:34.013 Test: blockdev write read invalid size ...passed 00:18:34.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:34.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:34.013 Test: blockdev write read max offset ...passed 00:18:34.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:34.013 Test: blockdev writev readv 8 blocks ...passed 00:18:34.013 Test: blockdev writev readv 30 x 1block ...passed 00:18:34.013 Test: blockdev writev readv block ...passed 00:18:34.013 Test: blockdev writev readv size > 128k ...passed 00:18:34.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:34.013 Test: blockdev comparev and writev ...passed 00:18:34.013 Test: blockdev nvme passthru rw ...passed 00:18:34.013 Test: blockdev nvme passthru vendor specific ...passed 00:18:34.013 Test: blockdev nvme admin passthru ...passed 00:18:34.013 Test: blockdev copy ...passed 00:18:34.014 Suite: bdevio tests on: nvme0n1 00:18:34.014 Test: blockdev write read block ...passed 00:18:34.014 Test: blockdev write zeroes read block ...passed 00:18:34.014 Test: blockdev write zeroes read no split ...passed 00:18:34.014 Test: blockdev write zeroes read split ...passed 00:18:34.014 Test: blockdev write zeroes read split partial ...passed 00:18:34.014 Test: blockdev reset ...passed 00:18:34.014 Test: blockdev write read 8 blocks ...passed 00:18:34.014 Test: blockdev write read size > 128k ...passed 00:18:34.014 Test: blockdev write read invalid size ...passed 00:18:34.014 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:34.014 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:34.014 Test: blockdev write read max offset ...passed 00:18:34.014 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:34.014 Test: blockdev writev readv 8 blocks ...passed 00:18:34.014 Test: blockdev writev readv 30 x 1block ...passed 00:18:34.014 Test: blockdev writev readv block ...passed 00:18:34.014 Test: blockdev writev readv size > 128k ...passed 00:18:34.014 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:34.014 Test: blockdev comparev and writev ...passed 00:18:34.014 Test: blockdev nvme passthru rw ...passed 00:18:34.014 Test: blockdev nvme passthru vendor specific ...passed 00:18:34.014 Test: blockdev nvme admin passthru ...passed 00:18:34.014 Test: blockdev copy ...passed 
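Each of the six suites runs the same 23 cases (138 tests total in the summary below), and cases such as reset, copy and the NVMe passthru variants cover I/O types these bdevs do not advertise: the bdev_get_bdevs dump earlier reports only read, write and write_zeroes as true. A spot check against a live target, assuming the same RPC socket:

  scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].supported_io_types'
  # expected per the dump above: read/write/write_zeroes true; unmap, flush,
  # reset, compare, copy and all nvme_* passthru types false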
00:18:34.014 00:18:34.014 Run Summary: Type Total Ran Passed Failed Inactive 00:18:34.014 suites 6 6 n/a 0 0 00:18:34.014 tests 138 138 138 0 0 00:18:34.014 asserts 780 780 780 0 n/a 00:18:34.014 00:18:34.014 Elapsed time = 1.254 seconds 00:18:34.014 0 00:18:34.014 13:59:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 76772 00:18:34.014 13:59:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 76772 ']' 00:18:34.014 13:59:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 76772 00:18:34.014 13:59:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:18:34.272 13:59:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.272 13:59:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76772 00:18:34.272 13:59:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:34.272 13:59:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:34.272 killing process with pid 76772 00:18:34.272 13:59:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76772' 00:18:34.272 13:59:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 76772 00:18:34.272 13:59:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 76772 00:18:35.645 13:59:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:18:35.645 00:18:35.645 real 0m2.945s 00:18:35.645 user 0m7.072s 00:18:35.645 sys 0m0.372s 00:18:35.645 13:59:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:35.645 13:59:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:35.645 ************************************ 00:18:35.645 END TEST bdev_bounds 00:18:35.645 ************************************ 00:18:35.645 13:59:59 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:18:35.645 13:59:59 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:18:35.645 13:59:59 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:35.645 13:59:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.645 13:59:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:35.645 ************************************ 00:18:35.645 START TEST bdev_nbd 00:18:35.645 ************************************ 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 
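The nbd stage that begins here talks to a separate bdev_svc app on its own RPC socket rather than the main target. Reconstructed from the command line just below (backgrounding with & and the shortened paths are assumptions):

  test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json '' &
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks    # empty list until bdevs are exported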
00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=76835 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 76835 /var/tmp/spdk-nbd.sock 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 76835 ']' 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.645 13:59:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:35.645 [2024-07-15 13:59:59.934870] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
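The loops that follow repeat one start/verify/stop cycle per bdev, six times over. Condensed here to a single iteration with /dev/nbd0 as the example device (the first pass below actually lets the target pick the nbd index; the explicit two-argument form appears near the end of this section), with the dd output path shortened:

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
  grep -q -w nbd0 /proc/partitions                              # same readiness check waitfornbd polls
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # one O_DIRECT block through the export
  [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]                        # the test only requires a nonzero copy
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks        # back to [] once every export is stopped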
00:18:35.645 [2024-07-15 13:59:59.935001] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.645 [2024-07-15 14:00:00.102651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.901 [2024-07-15 14:00:00.323042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:36.464 14:00:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.722 
1+0 records in 00:18:36.722 1+0 records out 00:18:36.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501568 s, 8.2 MB/s 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:36.722 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:37.005 1+0 records in 00:18:37.005 1+0 records out 00:18:37.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649942 s, 6.3 MB/s 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:37.005 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:37.568 14:00:01 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:37.568 1+0 records in 00:18:37.568 1+0 records out 00:18:37.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570464 s, 7.2 MB/s 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:37.568 14:00:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:37.823 1+0 records in 00:18:37.823 1+0 records out 00:18:37.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625485 s, 6.5 MB/s 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:37.823 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:38.080 1+0 records in 00:18:38.080 1+0 records out 00:18:38.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607855 s, 6.7 MB/s 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:38.080 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.081 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:38.081 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:38.081 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:38.081 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:38.081 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:18:38.338 14:00:02 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:38.338 1+0 records in 00:18:38.338 1+0 records out 00:18:38.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731159 s, 5.6 MB/s 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:38.338 14:00:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:38.595 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:38.595 { 00:18:38.595 "nbd_device": "/dev/nbd0", 00:18:38.595 "bdev_name": "nvme0n1" 00:18:38.595 }, 00:18:38.595 { 00:18:38.595 "nbd_device": "/dev/nbd1", 00:18:38.595 "bdev_name": "nvme1n1" 00:18:38.595 }, 00:18:38.595 { 00:18:38.595 "nbd_device": "/dev/nbd2", 00:18:38.595 "bdev_name": "nvme2n1" 00:18:38.595 }, 00:18:38.595 { 00:18:38.595 "nbd_device": "/dev/nbd3", 00:18:38.595 "bdev_name": "nvme2n2" 00:18:38.595 }, 00:18:38.595 { 00:18:38.595 "nbd_device": "/dev/nbd4", 00:18:38.595 "bdev_name": "nvme2n3" 00:18:38.595 }, 00:18:38.595 { 00:18:38.595 "nbd_device": "/dev/nbd5", 00:18:38.595 "bdev_name": "nvme3n1" 00:18:38.595 } 00:18:38.595 ]' 00:18:38.595 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:38.595 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:38.595 { 00:18:38.595 "nbd_device": "/dev/nbd0", 00:18:38.595 "bdev_name": "nvme0n1" 00:18:38.595 }, 00:18:38.595 { 00:18:38.595 "nbd_device": "/dev/nbd1", 00:18:38.595 "bdev_name": "nvme1n1" 00:18:38.595 }, 00:18:38.595 { 00:18:38.595 "nbd_device": "/dev/nbd2", 00:18:38.595 "bdev_name": "nvme2n1" 00:18:38.595 }, 00:18:38.595 { 00:18:38.595 "nbd_device": "/dev/nbd3", 00:18:38.595 "bdev_name": "nvme2n2" 00:18:38.595 }, 00:18:38.595 { 00:18:38.596 "nbd_device": "/dev/nbd4", 00:18:38.596 "bdev_name": "nvme2n3" 00:18:38.596 }, 00:18:38.596 { 00:18:38.596 "nbd_device": "/dev/nbd5", 00:18:38.596 "bdev_name": "nvme3n1" 00:18:38.596 } 00:18:38.596 ]' 00:18:38.596 14:00:03 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:38.871 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:18:38.871 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:38.871 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:18:38.871 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:38.871 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:38.871 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:38.871 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:39.127 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:39.127 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:39.127 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:39.127 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.127 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.127 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:39.127 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.127 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.127 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.127 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:39.384 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:39.384 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:39.384 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:39.384 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.384 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.384 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:39.384 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.384 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.384 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.384 14:00:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:39.641 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:39.641 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:39.641 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:39.641 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.641 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.641 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:18:39.641 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.641 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.641 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.641 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:39.897 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:39.897 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:39.897 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:39.897 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.897 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.897 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:39.897 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.897 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.897 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.897 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:40.154 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:40.154 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:40.154 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:40.154 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:40.154 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:40.154 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:40.154 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:40.154 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:40.154 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.154 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:40.719 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:40.719 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:40.719 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:40.719 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:40.719 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:40.719 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:40.719 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:40.719 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:40.719 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:40.719 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:40.719 14:00:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:40.719 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:40.719 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:40.719 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:40.977 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:40.977 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:40.977 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:40.977 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:40.977 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:40.977 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:40.977 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:40.977 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:40.977 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:40.977 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:40.978 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:18:41.236 /dev/nbd0 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:41.236 1+0 records in 00:18:41.236 1+0 records out 00:18:41.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529585 s, 7.7 MB/s 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:41.236 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:18:41.492 /dev/nbd1 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:41.492 1+0 records in 00:18:41.492 1+0 records out 00:18:41.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066088 s, 6.2 MB/s 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:41.492 14:00:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:41.492 14:00:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:18:41.750 /dev/nbd10 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:41.750 1+0 records in 00:18:41.750 1+0 records out 00:18:41.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585934 s, 7.0 MB/s 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:41.750 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:18:42.007 /dev/nbd11 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:42.007 14:00:06 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.007 1+0 records in 00:18:42.007 1+0 records out 00:18:42.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495178 s, 8.3 MB/s 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.007 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:42.008 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:18:42.264 /dev/nbd12 00:18:42.264 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.522 1+0 records in 00:18:42.522 1+0 records out 00:18:42.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770413 s, 5.3 MB/s 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:42.522 14:00:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:18:42.780 /dev/nbd13 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.780 1+0 records in 00:18:42.780 1+0 records out 00:18:42.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674665 s, 6.1 MB/s 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:42.780 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd0", 00:18:43.039 "bdev_name": "nvme0n1" 00:18:43.039 }, 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd1", 00:18:43.039 "bdev_name": "nvme1n1" 00:18:43.039 }, 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd10", 00:18:43.039 "bdev_name": "nvme2n1" 00:18:43.039 }, 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd11", 00:18:43.039 "bdev_name": "nvme2n2" 00:18:43.039 }, 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd12", 00:18:43.039 "bdev_name": "nvme2n3" 00:18:43.039 }, 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd13", 00:18:43.039 "bdev_name": "nvme3n1" 00:18:43.039 } 00:18:43.039 ]' 00:18:43.039 14:00:07 
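The six probes traced above all run waitfornbd (autotest_common.sh@866-887): after nbd_start_disk returns, the helper polls /proc/partitions until the kernel exposes the device, then reads a single 4 KiB block with O_DIRECT to confirm the device actually serves I/O. A minimal reconstruction from the trace follows; the retry delay and the failure branch are assumptions, since xtrace shows only the tests that ran, and the second retry loop the trace shows around the dd (@882) is elided:

    # Sketch of waitfornbd as reconstructed from the trace above; rootdir,
    # the sleep, and the failure path are assumptions.
    waitfornbd() {
        local nbd_name=$1
        local i size
        local tmp_file=$rootdir/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1   # assumed; the trace shows only the grep and the break
        done
        # One direct-I/O read proves the device is live, not just registered
        dd if=/dev/"$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp_file")
        rm -f "$tmp_file"
        [[ $size != 0 ]]   # assumed failure propagation, not visible in the trace
    }

The stop path mirrors this with waitfornbd_exit (nbd_common.sh@35-45), which polls until the name disappears from /proc/partitions instead.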
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd0", 00:18:43.039 "bdev_name": "nvme0n1" 00:18:43.039 }, 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd1", 00:18:43.039 "bdev_name": "nvme1n1" 00:18:43.039 }, 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd10", 00:18:43.039 "bdev_name": "nvme2n1" 00:18:43.039 }, 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd11", 00:18:43.039 "bdev_name": "nvme2n2" 00:18:43.039 }, 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd12", 00:18:43.039 "bdev_name": "nvme2n3" 00:18:43.039 }, 00:18:43.039 { 00:18:43.039 "nbd_device": "/dev/nbd13", 00:18:43.039 "bdev_name": "nvme3n1" 00:18:43.039 } 00:18:43.039 ]' 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:43.039 /dev/nbd1 00:18:43.039 /dev/nbd10 00:18:43.039 /dev/nbd11 00:18:43.039 /dev/nbd12 00:18:43.039 /dev/nbd13' 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:43.039 /dev/nbd1 00:18:43.039 /dev/nbd10 00:18:43.039 /dev/nbd11 00:18:43.039 /dev/nbd12 00:18:43.039 /dev/nbd13' 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:43.039 256+0 records in 00:18:43.039 256+0 records out 00:18:43.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00663137 s, 158 MB/s 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:43.039 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:43.296 256+0 records in 00:18:43.296 256+0 records out 00:18:43.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13446 s, 7.8 MB/s 00:18:43.297 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:43.297 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:43.297 256+0 records in 00:18:43.297 256+0 records out 00:18:43.297 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.169291 s, 6.2 MB/s 00:18:43.297 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:43.297 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:43.554 256+0 records in 00:18:43.554 256+0 records out 00:18:43.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163691 s, 6.4 MB/s 00:18:43.554 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:43.554 14:00:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:43.811 256+0 records in 00:18:43.811 256+0 records out 00:18:43.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128188 s, 8.2 MB/s 00:18:43.811 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:43.811 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:43.811 256+0 records in 00:18:43.811 256+0 records out 00:18:43.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138192 s, 7.6 MB/s 00:18:43.811 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:43.811 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:44.069 256+0 records in 00:18:44.069 256+0 records out 00:18:44.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119314 s, 8.8 MB/s 00:18:44.069 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:18:44.069 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:44.069 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:44.069 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:44.069 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:44.070 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:44.330 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:44.330 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:44.330 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:44.330 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:44.330 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:44.330 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:44.330 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:44.330 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:44.330 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:44.330 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:44.588 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:44.588 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:44.588 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:44.588 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:44.588 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:44.588 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:44.588 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:44.588 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:44.588 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:44.588 14:00:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:18:44.845 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:44.845 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:44.845 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:44.845 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:44.845 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:44.845 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:44.845 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:44.845 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:44.845 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:44.845 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:45.104 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:45.104 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:45.104 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:45.104 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.104 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.104 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:45.104 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:45.104 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.104 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.104 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:45.361 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:18:45.361 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:45.361 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:45.361 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.361 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.361 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:45.361 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:45.361 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.361 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.361 14:00:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:45.618 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:45.618 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:45.618 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:45.618 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.618 14:00:10 
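The round trip verified just before these stops is nbd_dd_data_verify (nbd_common.sh@70-85): 256 blocks of 4 KiB urandom go out through each NBD device with O_DIRECT, then cmp reads every device back against the scratch file. A close reconstruction from the trace; the if/elif structure is inferred from the '[' write = write ']' tests, and error handling is assumed:

    # Sketch of the write/verify pass traced above.
    nbd_dd_data_verify() {
        local nbd_list=($1)   # space-separated device list, as passed in the trace
        local operation=$2
        local i
        local tmp_file=$rootdir/test/bdev/nbdrandtest
        if [[ $operation == write ]]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [[ $operation == verify ]]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }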
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.618 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:45.618 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:45.618 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.618 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:45.618 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.618 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:45.874 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:45.874 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:45.874 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.131 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:46.132 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:18:46.132 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:18:46.132 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:46.389 malloc_lvol_verify 00:18:46.389 14:00:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:46.648 3d00c6b4-cc33-4170-8cfb-45af32a6b216 00:18:46.648 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:46.906 0f8c6583-85fc-470a-9a93-bb0bd71d51dd 00:18:46.906 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:47.163 /dev/nbd0 00:18:47.163 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:18:47.163 mke2fs 1.46.5 (30-Dec-2021) 00:18:47.163 Discarding device blocks: 0/4096 done 
00:18:47.163 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:47.163 00:18:47.163 Allocating group tables: 0/1 done 00:18:47.163 Writing inode tables: 0/1 done 00:18:47.163 Creating journal (1024 blocks): done 00:18:47.163 Writing superblocks and filesystem accounting information: 0/1 done 00:18:47.163 00:18:47.163 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:18:47.163 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:47.163 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:47.163 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:47.163 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:47.163 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:47.163 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.163 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:47.420 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:47.420 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:47.420 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:47.420 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.420 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.420 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:47.420 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:47.420 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.420 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:18:47.420 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:18:47.421 14:00:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 76835 00:18:47.421 14:00:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 76835 ']' 00:18:47.421 14:00:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 76835 00:18:47.421 14:00:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:18:47.421 14:00:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:47.421 14:00:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76835 00:18:47.421 14:00:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:47.421 killing process with pid 76835 00:18:47.421 14:00:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:47.421 14:00:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76835' 00:18:47.421 14:00:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 76835 00:18:47.421 14:00:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 76835 00:18:48.791 14:00:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:18:48.791 00:18:48.791 real 0m13.242s 00:18:48.791 user 0m18.994s 00:18:48.791 sys 0m4.169s 00:18:48.791 14:00:13 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:18:48.791 ************************************ 00:18:48.791 END TEST bdev_nbd 00:18:48.791 ************************************ 00:18:48.791 14:00:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:48.791 14:00:13 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:18:48.791 14:00:13 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:18:48.791 14:00:13 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:18:48.791 14:00:13 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:18:48.791 14:00:13 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:18:48.791 14:00:13 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:48.791 14:00:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.791 14:00:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:48.791 ************************************ 00:18:48.791 START TEST bdev_fio 00:18:48.791 ************************************ 00:18:48.791 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:18:48.791 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:48.791 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:18:48.791 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:48.791 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:48.791 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:18:48.791 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:18:48.792 14:00:13 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:48.792 ************************************ 00:18:48.792 START TEST bdev_fio_rw_verify 00:18:48.792 ************************************ 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:48.792 
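The config steps traced here come down to two pieces: fio_config_gen appends serialize_overlap=1 whenever the bdev type is AIO and the installed fio is a 3.x release (overlapping verify I/O would otherwise race), and blockdev.sh@341-343 then appends one job section per bdev. A sketch of the generation loop; the >> redirect target is inferred, since xtrace does not print redirections:

    # Reconstructed from the trace above; fio_config is assumed to be the
    # test/bdev/bdev.fio file touched by fio_config_gen.
    fio_config=$rootdir/test/bdev/bdev.fio
    if [[ $(/usr/src/fio/fio --version) == *fio-3* ]]; then
        echo serialize_overlap=1 >> "$fio_config"
    fi
    for b in "${bdevs_name[@]}"; do
        echo "[job_$b]" >> "$fio_config"
        echo "filename=$b" >> "$fio_config"
    done

The result gives each of the six xNVMe bdevs its own fio job, all sharing the global randwrite/verify options.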
14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:48.792 14:00:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:49.052 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:49.052 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:49.052 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:49.052 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:49.052 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:49.052 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:49.052 fio-3.35 00:18:49.052 Starting 6 threads 00:19:01.244 00:19:01.244 
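fio_bdev above resolves to fio_plugin (autotest_common.sh@1337-1352), whose real job is library ordering: on ASan builds the sanitizer runtime must be loaded before the SPDK spdk_bdev ioengine plugin, so the wrapper scrapes libasan's path out of the plugin's ldd output and preloads both. Reconstructed sketch, with argument handling simplified:

    # Sketch of fio_plugin from the trace above.
    fio_plugin() {
        local plugin=$1; shift
        local sanitizers=('libasan' 'libclang_rt.asan')
        local sanitizer asan_lib=
        for sanitizer in "${sanitizers[@]}"; do
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }

On this runner the loop resolves to /usr/lib64/libasan.so.8, matching the LD_PRELOAD value in the trace.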
job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=77263: Mon Jul 15 14:00:24 2024 00:19:01.244 read: IOPS=27.0k, BW=105MiB/s (110MB/s)(1054MiB/10001msec) 00:19:01.244 slat (usec): min=3, max=509, avg= 7.82, stdev= 5.16 00:19:01.244 clat (usec): min=91, max=11334, avg=686.75, stdev=371.47 00:19:01.244 lat (usec): min=95, max=11342, avg=694.57, stdev=372.10 00:19:01.244 clat percentiles (usec): 00:19:01.244 | 50.000th=[ 676], 99.000th=[ 1598], 99.900th=[ 5407], 99.990th=[10552], 00:19:01.244 | 99.999th=[11338] 00:19:01.244 write: IOPS=27.4k, BW=107MiB/s (112MB/s)(1070MiB/10001msec); 0 zone resets 00:19:01.244 slat (usec): min=7, max=10641, avg=29.74, stdev=38.26 00:19:01.244 clat (usec): min=85, max=10916, avg=758.97, stdev=367.17 00:19:01.244 lat (usec): min=115, max=11372, avg=788.71, stdev=370.66 00:19:01.244 clat percentiles (usec): 00:19:01.244 | 50.000th=[ 742], 99.000th=[ 1696], 99.900th=[ 5407], 99.990th=[ 8094], 00:19:01.244 | 99.999th=[10945] 00:19:01.244 bw ( KiB/s): min=86991, max=138054, per=99.45%, avg=108970.21, stdev=2261.43, samples=114 00:19:01.244 iops : min=21747, max=34512, avg=27242.16, stdev=565.34, samples=114 00:19:01.245 lat (usec) : 100=0.01%, 250=2.61%, 500=20.44%, 750=33.24%, 1000=32.04% 00:19:01.245 lat (msec) : 2=11.03%, 4=0.45%, 10=0.18%, 20=0.01% 00:19:01.245 cpu : usr=60.68%, sys=26.08%, ctx=6905, majf=0, minf=23366 00:19:01.245 IO depths : 1=12.3%, 2=24.9%, 4=50.1%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.245 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.245 issued rwts: total=269786,273954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.245 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:01.245 00:19:01.245 Run status group 0 (all jobs): 00:19:01.245 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=1054MiB (1105MB), run=10001-10001msec 00:19:01.245 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=1070MiB (1122MB), run=10001-10001msec 00:19:01.245 ----------------------------------------------------- 00:19:01.245 Suppressions used: 00:19:01.245 count bytes template 00:19:01.245 6 48 /usr/src/fio/parse.c 00:19:01.245 3978 381888 /usr/src/fio/iolog.c 00:19:01.245 1 8 libtcmalloc_minimal.so 00:19:01.245 1 904 libcrypto.so 00:19:01.245 ----------------------------------------------------- 00:19:01.245 00:19:01.245 00:19:01.245 real 0m12.323s 00:19:01.245 user 0m38.277s 00:19:01.245 sys 0m15.964s 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:01.245 ************************************ 00:19:01.245 END TEST bdev_fio_rw_verify 00:19:01.245 ************************************ 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:01.245 14:00:25 
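The trim pass set up next never runs a job: after regenerating bdev.fio for a trimwrite workload, blockdev.sh@355 filters the bdev dump below for devices that advertise unmap support, and every xNVMe bdev in it reports "unmap": false, so the filter yields an empty name list and the config is simply removed. The filter itself:

    # Applied at blockdev.sh@355 to the JSON dump that follows; against these
    # six xNVMe bdevs it prints nothing, so the trim fio job is skipped.
    jq -r 'select(.supported_io_types.unmap == true) | .name'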
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d519e6df-ed05-42b3-9ce6-29ce5823f8d4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d519e6df-ed05-42b3-9ce6-29ce5823f8d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "060d08e0-9784-4ad0-a4dc-44d63ef97a37"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "060d08e0-9784-4ad0-a4dc-44d63ef97a37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "11a3092a-23a6-4eee-be96-5644a9547475"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "11a3092a-23a6-4eee-be96-5644a9547475",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "baecb7bf-3910-4371-9926-ac5bf2b8d231"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "baecb7bf-3910-4371-9926-ac5bf2b8d231",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a196f7f8-f25e-4217-854f-04bcd3a4683b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a196f7f8-f25e-4217-854f-04bcd3a4683b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2edc6e16-fb75-4771-a8d5-808f187fff7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2edc6e16-fb75-4771-a8d5-808f187fff7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 
-- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:19:01.245 /home/vagrant/spdk_repo/spdk 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:19:01.245 00:19:01.245 real 0m12.479s 00:19:01.245 user 0m38.366s 00:19:01.245 sys 0m16.032s 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:01.245 ************************************ 00:19:01.245 END TEST bdev_fio 00:19:01.245 ************************************ 00:19:01.245 14:00:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:01.245 14:00:25 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:19:01.245 14:00:25 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:01.245 14:00:25 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:01.245 14:00:25 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:19:01.245 14:00:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:01.245 14:00:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:01.245 ************************************ 00:19:01.245 START TEST bdev_verify 00:19:01.245 ************************************ 00:19:01.245 14:00:25 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:01.245 [2024-07-15 14:00:25.732144] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:19:01.245 [2024-07-15 14:00:25.732348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77438 ] 00:19:01.541 [2024-07-15 14:00:25.907246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:01.800 [2024-07-15 14:00:26.173447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.800 [2024-07-15 14:00:26.173455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.084 Running I/O for 5 seconds... 
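The xtrace line above is the whole verify stage: the bdevperf example binary loads the bdev layer from the JSON dumped earlier and drives a write-then-read-back workload against every bdev. A minimal standalone re-run, sketched under the assumption that SPDK_DIR points at a built checkout like the one in this log, could look like this:

  #!/usr/bin/env bash
  # Sketch: reproduce the harness's verify pass outside run_test.
  # SPDK_DIR is an assumption; adjust to your own build location.
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
  # -q 128    queue depth per job
  # -o 4096   I/O size in bytes
  # -w verify write a pattern, read it back, and compare
  # -t 5      run time in seconds
  # -m 0x3    core mask; with -C (kept as in the log's invocation) this
  #           yields the two jobs per bdev, one per core, seen in the
  #           results table that follows
  "$SPDK_DIR/build/examples/bdevperf" \
      --json "$SPDK_DIR/test/bdev/bdev.json" \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3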
00:19:01.245  [2024-07-15 14:00:25.732144] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:19:01.245  [2024-07-15 14:00:25.732348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77438 ]
00:19:01.541  [2024-07-15 14:00:25.907246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:01.800  [2024-07-15 14:00:26.173447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:01.800  [2024-07-15 14:00:26.173455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:02.084  Running I/O for 5 seconds...
00:19:07.349
00:19:07.349  Latency(us)
00:19:07.349  Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:07.349  Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0x0 length 0xa0000
00:19:07.349  nvme0n1 : 5.05 1673.65 6.54 0.00 0.00 76340.91 4676.89 81502.95
00:19:07.349  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0xa0000 length 0xa0000
00:19:07.349  nvme0n1 : 5.06 1594.29 6.23 0.00 0.00 80127.00 15728.64 88175.71
00:19:07.349  Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0x0 length 0xbd0bd
00:19:07.349  nvme1n1 : 5.05 2691.41 10.51 0.00 0.00 47296.02 4379.00 103427.72
00:19:07.349  Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0xbd0bd length 0xbd0bd
00:19:07.349  nvme1n1 : 5.05 2577.94 10.07 0.00 0.00 49411.84 3961.95 117726.49
00:19:07.349  Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0x0 length 0x80000
00:19:07.349  nvme2n1 : 5.06 1669.82 6.52 0.00 0.00 76185.99 6583.39 95801.72
00:19:07.349  Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0x80000 length 0x80000
00:19:07.349  nvme2n1 : 5.06 1618.98 6.32 0.00 0.00 78564.60 8102.63 86269.21
00:19:07.349  Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0x0 length 0x80000
00:19:07.349  nvme2n2 : 5.06 1645.33 6.43 0.00 0.00 77173.47 10545.34 78643.20
00:19:07.349  Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0x80000 length 0x80000
00:19:07.349  nvme2n2 : 5.05 1597.52 6.24 0.00 0.00 79474.35 11498.59 78166.57
00:19:07.349  Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0x0 length 0x80000
00:19:07.349  nvme2n3 : 5.05 1646.64 6.43 0.00 0.00 76954.37 18945.86 70540.57
00:19:07.349  Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0x80000 length 0x80000
00:19:07.349  nvme2n3 : 5.05 1596.37 6.24 0.00 0.00 79395.01 14000.87 73400.32
00:19:07.349  Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0x0 length 0x20000
00:19:07.349  nvme3n1 : 5.06 1669.02 6.52 0.00 0.00 75788.09 6911.07 75783.45
00:19:07.349  Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:07.349  Verification LBA range: start 0x20000 length 0x20000
00:19:07.349  nvme3n1 : 5.07 1616.49 6.31 0.00 0.00 78263.90 1966.08 83886.08
00:19:07.349  ===================================================================================================================
00:19:07.349  Total : 21597.45 84.37 0.00 0.00 70616.24 1966.08 117726.49
00:19:08.720
00:19:08.720  real	0m7.292s
00:19:08.720  user	0m11.318s
00:19:08.720  sys	0m1.782s
00:19:08.720  14:00:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:08.720  ************************************
00:19:08.720  END TEST bdev_verify
00:19:08.720  ************************************
00:19:08.720  14:00:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:19:08.720  14:00:32 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0
00:19:08.720  14:00:32 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:08.720  14:00:32 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:19:08.720  14:00:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:08.720  14:00:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:08.720  ************************************
00:19:08.720  START TEST bdev_verify_big_io
00:19:08.720  ************************************
00:19:08.720  14:00:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:08.720  [2024-07-15 14:00:33.078500] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:19:08.720  [2024-07-15 14:00:33.078715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77541 ]
00:19:08.976  [2024-07-15 14:00:33.258832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:08.976  [2024-07-15 14:00:33.446333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:08.976  [2024-07-15 14:00:33.446351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:09.541  Running I/O for 5 seconds...
00:19:16.096
00:19:16.096  Latency(us)
00:19:16.096  Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:16.096  Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0x0 length 0xa000
00:19:16.096  nvme0n1 : 5.82 148.49 9.28 0.00 0.00 835153.71 122969.37 823608.79
00:19:16.096  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0xa000 length 0xa000
00:19:16.096  nvme0n1 : 5.98 115.14 7.20 0.00 0.00 1083247.73 76260.07 1273543.21
00:19:16.096  Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0x0 length 0xbd0b
00:19:16.096  nvme1n1 : 5.94 134.73 8.42 0.00 0.00 875063.04 41943.04 819795.78
00:19:16.096  Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0xbd0b length 0xbd0b
00:19:16.096  nvme1n1 : 5.83 127.92 7.99 0.00 0.00 940477.54 79119.83 1014258.97
00:19:16.096  Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0x0 length 0x8000
00:19:16.096  nvme2n1 : 5.98 93.71 5.86 0.00 0.00 1274102.40 138221.38 2684354.56
00:19:16.096  Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0x8000 length 0x8000
00:19:16.096  nvme2n1 : 5.84 135.72 8.48 0.00 0.00 857421.51 77689.95 1372681.31
00:19:16.096  Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0x0 length 0x8000
00:19:16.096  nvme2n2 : 5.94 126.51 7.91 0.00 0.00 907086.69 116773.24 2181038.08
00:19:16.096  Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0x8000 length 0x8000
00:19:16.096  nvme2n2 : 5.99 140.30 8.77 0.00 0.00 784281.78 4468.36 911307.87
00:19:16.096  Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0x0 length 0x8000
00:19:16.096  nvme2n3 : 5.95 184.21 11.51 0.00 0.00 606663.32 94371.84 812169.77
00:19:16.096  Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0x8000 length 0x8000
00:19:16.096  nvme2n3 : 6.00 106.72 6.67 0.00 0.00 1030121.43 6076.97 2150534.05
00:19:16.096  Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0x0 length 0x2000
00:19:16.096  nvme3n1 : 5.97 126.02 7.88 0.00 0.00 878918.25 8579.26 2653850.53
00:19:16.096  Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:16.096  Verification LBA range: start 0x2000 length 0x2000
00:19:16.096  nvme3n1 : 5.99 117.48 7.34 0.00 0.00 898346.29 7328.12 2958890.82
00:19:16.096  ===================================================================================================================
00:19:16.096  Total : 1556.94 97.31 0.00 0.00 890140.57 4468.36 2958890.82
00:19:17.027
00:19:17.027  real	0m8.408s
00:19:17.027  user	0m15.065s
00:19:17.027  sys	0m0.492s
00:19:17.027  14:00:41 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:17.027  ************************************
00:19:17.027  END TEST bdev_verify_big_io
00:19:17.027  ************************************
00:19:17.027  14:00:41 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:19:17.027  14:00:41 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0
00:19:17.027  14:00:41 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:17.027  14:00:41 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:19:17.027  14:00:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:17.027  14:00:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:17.027  ************************************
00:19:17.027  START TEST bdev_write_zeroes
00:19:17.027  ************************************
00:19:17.027  14:00:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:17.027  [2024-07-15 14:00:41.507021] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:19:17.027  [2024-07-15 14:00:41.507183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77651 ]
00:19:17.285  [2024-07-15 14:00:41.679587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:17.543  [2024-07-15 14:00:41.935010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:17.799  Running I/O for 1 seconds...
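While the write_zeroes job runs, it is worth noting that this workload only makes sense because every xNVMe bdev dumped at the top of this section advertises "write_zeroes": true in supported_io_types; the fio unmap pass was skipped earlier for exactly the opposite reason. A rough sketch of the same capability check against a live target, reusing the harness's jq pattern (the rpc.py path and a running target on the default socket are assumptions):

  # Sketch: list bdevs by an advertised capability flag, mirroring the
  # harness's `select(.supported_io_types.unmap == true)` filter above.
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
  "$SPDK_DIR/scripts/rpc.py" bdev_get_bdevs \
    | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'
  # Against the bdevs shown in this log, this prints the nvme* names, while
  # swapping in .unmap prints nothing -- hence the skipped unmap fio pass.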
00:19:19.175
00:19:19.175  Latency(us)
00:19:19.175  Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:19.175  Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:19.175  nvme0n1 : 1.01 10061.54 39.30 0.00 0.00 12709.20 7417.48 21090.68
00:19:19.175  Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:19.175  nvme1n1 : 1.01 15124.54 59.08 0.00 0.00 8444.97 5272.67 14596.65
00:19:19.175  Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:19.175  nvme2n1 : 1.02 10059.19 39.29 0.00 0.00 12662.21 5123.72 21567.30
00:19:19.175  Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:19.175  nvme2n2 : 1.02 10044.20 39.24 0.00 0.00 12668.00 5093.93 21209.83
00:19:19.175  Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:19.175  nvme2n3 : 1.02 10084.19 39.39 0.00 0.00 12608.22 5183.30 20852.36
00:19:19.175  Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:19.175  nvme3n1 : 1.02 10029.16 39.18 0.00 0.00 12667.02 5242.88 20256.58
00:19:19.175  ===================================================================================================================
00:19:19.175  Total : 65402.82 255.48 0.00 0.00 11689.15 5093.93 21567.30
00:19:20.108
00:19:20.108  real	0m3.166s
00:19:20.108  user	0m2.413s
00:19:20.108  sys	0m0.576s
00:19:20.108  14:00:44 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:20.108  ************************************
00:19:20.108  END TEST bdev_write_zeroes
00:19:20.108  ************************************
00:19:20.108  14:00:44 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:19:20.108  14:00:44 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0
00:19:20.108  14:00:44 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:20.108  14:00:44 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:19:20.108  14:00:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:20.108  14:00:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:20.108  ************************************
00:19:20.108  START TEST bdev_json_nonenclosed
00:19:20.108  ************************************
00:19:20.108  14:00:44 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:20.365  [2024-07-15 14:00:44.717884] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:19:20.365  [2024-07-15 14:00:44.718067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77710 ]
00:19:20.365  [2024-07-15 14:00:44.881358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:20.622  [2024-07-15 14:00:45.071570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:20.622  [2024-07-15 14:00:45.071678] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:19:20.622  [2024-07-15 14:00:45.071704] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:20.622  [2024-07-15 14:00:45.071722] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:21.194
00:19:21.194  real	0m0.861s
00:19:21.194  user	0m0.646s
00:19:21.194  sys	0m0.108s
00:19:21.194  14:00:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234
00:19:21.194  14:00:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:21.194  ************************************
00:19:21.194  END TEST bdev_json_nonenclosed
00:19:21.194  ************************************
00:19:21.194  14:00:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:19:21.194  14:00:45 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234
00:19:21.194  14:00:45 blockdev_xnvme -- bdev/blockdev.sh@782 -- # true
00:19:21.194  14:00:45 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:21.194  14:00:45 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:19:21.194  14:00:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:21.194  14:00:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:21.194  ************************************
00:19:21.194  START TEST bdev_json_nonarray
00:19:21.194  ************************************
00:19:21.194  14:00:45 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:21.194  [2024-07-15 14:00:45.629189] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:19:21.194  [2024-07-15 14:00:45.629377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77736 ]
00:19:21.452  [2024-07-15 14:00:45.795474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:21.749  [2024-07-15 14:00:46.002531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:21.749  [2024-07-15 14:00:46.002659] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
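Both negative tests above feed bdevperf a deliberately malformed --json config and expect spdk_app_start to bail out (exit status 234 in this harness) instead of running I/O. The fixture files themselves are not shown in the log; a hypothetical pair that would trip the same two json_config.c checks could be built like this:

  # Sketch only -- hypothetical stand-ins for test/bdev/nonenclosed.json and
  # test/bdev/nonarray.json; the real fixtures' contents are not in this log.
  # Valid shape: a top-level object holding a "subsystems" array.
  cat > /tmp/good.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
  EOF
  # Trips "not enclosed in {}": the top-level value is not an object.
  cat > /tmp/nonenclosed.json <<'EOF'
  [ { "subsystem": "bdev", "config": [] } ]
  EOF
  # Trips "'subsystems' should be an array": an object where the array belongs.
  cat > /tmp/nonarray.json <<'EOF'
  { "subsystems": { "subsystem": "bdev", "config": [] } }
  EOF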
00:19:21.749  [2024-07-15 14:00:46.002686] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:21.749  [2024-07-15 14:00:46.002703] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:22.006
00:19:22.006  real	0m0.878s
00:19:22.006  user	0m0.643s
00:19:22.006  sys	0m0.128s
00:19:22.006  14:00:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234
00:19:22.006  14:00:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:22.006  ************************************
00:19:22.006  END TEST bdev_json_nonarray
00:19:22.006  ************************************
00:19:22.006  14:00:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:19:22.006  14:00:46 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # true
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]]
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]]
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]]
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]]
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]]
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]]
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]]
00:19:22.006  14:00:46 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:19:22.571  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:27.930  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:19:27.930  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:19:27.930  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:19:28.188  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:19:28.188
00:19:28.188  real	1m7.335s
00:19:28.188  user	1m46.924s
00:19:28.188  sys	0m42.219s
00:19:28.188  14:00:52 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:28.188  ************************************
00:19:28.188  END TEST blockdev_xnvme
00:19:28.188  ************************************
00:19:28.188  14:00:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:28.188  14:00:52 -- common/autotest_common.sh@1142 -- # return 0
00:19:28.188  14:00:52 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:19:28.188  14:00:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:19:28.188  14:00:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:28.188  14:00:52 -- common/autotest_common.sh@10 -- # set +x
00:19:28.188  ************************************
00:19:28.188  START TEST ublk
00:19:28.188  ************************************
00:19:28.188  14:00:52 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:19:28.188  * Looking for test storage...
00:19:28.188  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:19:28.188  14:00:52 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:19:28.188  14:00:52 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512
00:19:28.188  14:00:52 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:19:28.188  14:00:52 ublk -- lvol/common.sh@9 -- # AIO_BS=4096
00:19:28.188  14:00:52 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:19:28.188  14:00:52 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:19:28.188  14:00:52 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:19:28.188  14:00:52 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]]
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv
00:19:28.188  14:00:52 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config
00:19:28.188  14:00:52 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:19:28.188  14:00:52 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:28.188  14:00:52 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:28.188  ************************************
00:19:28.188  START TEST test_save_ublk_config
00:19:28.188  ************************************
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=78033
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 78033
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78033 ']'
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:28.188  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:28.188  14:00:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
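The waitforlisten call above blocks until the freshly launched spdk_tgt answers on its RPC socket. A loose stand-in for what it does, not a copy of the harness's actual implementation in autotest_common.sh, could poll the target with rpc.py like this:

  # Sketch: wait for a just-started SPDK target's RPC server to come up.
  # Rough approximation only; max_retries=100 mirrors the xtrace above.
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
  pid=$1                           # PID of the spdk_tgt we just launched
  for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
      if "$SPDK_DIR/scripts/rpc.py" -t 1 spdk_get_version >/dev/null 2>&1; then
          exit 0                   # /var/tmp/spdk.sock is answering
      fi
      sleep 0.1
  done
  echo "timed out waiting for /var/tmp/spdk.sock" >&2
  exit 1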
00:19:28.445  [2024-07-15 14:00:52.782664] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:19:28.446  [2024-07-15 14:00:52.782834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78033 ]
00:19:28.446  [2024-07-15 14:00:52.948459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:28.702  [2024-07-15 14:00:53.148068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:29.632  14:00:53 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:29.632  14:00:53 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0
00:19:29.632  14:00:53 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0
00:19:29.632  14:00:53 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd
00:19:29.632  14:00:53 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:29.632  14:00:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:29.633  [2024-07-15 14:00:53.885341] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:29.633  [2024-07-15 14:00:53.886490] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:29.633  malloc0
00:19:29.633  [2024-07-15 14:00:53.965494] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:19:29.633  [2024-07-15 14:00:53.965622] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:19:29.633  [2024-07-15 14:00:53.965642] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:29.633  [2024-07-15 14:00:53.965660] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:29.633  [2024-07-15 14:00:53.973526] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:29.633  [2024-07-15 14:00:53.973575] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:29.633  [2024-07-15 14:00:53.981348] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:29.633  [2024-07-15 14:00:53.981502] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:29.633  [2024-07-15 14:00:53.998338] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:29.633  0
00:19:29.891  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:29.891  14:00:54 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config
00:19:29.891  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:29.891  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:29.891  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:29.891  14:00:54 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{
  "subsystems": [
    { "subsystem": "keyring", "config": [] },
    {
      "subsystem": "iobuf",
      "config": [
        { "method": "iobuf_set_options",
          "params": { "small_pool_count": 8192, "large_pool_count": 1024, "small_bufsize": 8192, "large_bufsize": 135168 } }
      ]
    },
    {
      "subsystem": "sock",
      "config": [
        { "method": "sock_set_default_impl", "params": { "impl_name": "posix" } },
        { "method": "sock_impl_set_options",
          "params": { "impl_name": "ssl", "recv_buf_size": 4096, "send_buf_size": 4096, "enable_recv_pipe": true,
                      "enable_quickack": false, "enable_placement_id": 0, "enable_zerocopy_send_server": true,
                      "enable_zerocopy_send_client": false, "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false } },
        { "method": "sock_impl_set_options",
          "params": { "impl_name": "posix", "recv_buf_size": 2097152, "send_buf_size": 2097152, "enable_recv_pipe": true,
                      "enable_quickack": false, "enable_placement_id": 0, "enable_zerocopy_send_server": true,
                      "enable_zerocopy_send_client": false, "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false } }
      ]
    },
    { "subsystem": "vmd", "config": [] },
    {
      "subsystem": "accel",
      "config": [
        { "method": "accel_set_options",
          "params": { "small_cache_size": 128, "large_cache_size": 16, "task_count": 2048, "sequence_count": 2048, "buf_count": 2048 } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_set_options",
          "params": { "bdev_io_pool_size": 65535, "bdev_io_cache_size": 256, "bdev_auto_examine": true,
                      "iobuf_small_cache_size": 128, "iobuf_large_cache_size": 16 } },
        { "method": "bdev_raid_set_options", "params": { "process_window_size_kb": 1024 } },
        { "method": "bdev_iscsi_set_options", "params": { "timeout_sec": 30 } },
        { "method": "bdev_nvme_set_options",
          "params": { "action_on_timeout": "none", "timeout_us": 0, "timeout_admin_us": 0, "keep_alive_timeout_ms": 10000,
                      "arbitration_burst": 0, "low_priority_weight": 0, "medium_priority_weight": 0, "high_priority_weight": 0,
                      "nvme_adminq_poll_period_us": 10000, "nvme_ioq_poll_period_us": 0, "io_queue_requests": 0,
                      "delay_cmd_submit": true, "transport_retry_count": 4, "bdev_retry_count": 3, "transport_ack_timeout": 0,
                      "ctrlr_loss_timeout_sec": 0, "reconnect_delay_sec": 0, "fast_io_fail_timeout_sec": 0,
                      "disable_auto_failback": false, "generate_uuids": false, "transport_tos": 0, "nvme_error_stat": false,
                      "rdma_srq_size": 0, "io_path_stat": false, "allow_accel_sequence": false, "rdma_max_cq_size": 0,
                      "rdma_cm_event_timeout_ms": 0,
                      "dhchap_digests": [ "sha256", "sha384", "sha512" ],
                      "dhchap_dhgroups": [ "null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192" ] } },
        { "method": "bdev_nvme_set_hotplug", "params": { "period_us": 100000, "enable": false } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096, "physical_block_size": 4096,
                      "uuid": "70c20789-4191-4753-89da-8e8a46f0dbc8", "optimal_io_boundary": 0 } },
        { "method": "bdev_wait_for_examine" }
      ]
    },
    { "subsystem": "scsi", "config": null },
    {
      "subsystem": "scheduler",
      "config": [
        { "method": "framework_set_scheduler", "params": { "name": "static" } }
      ]
    },
    { "subsystem": "vhost_scsi", "config": [] },
    { "subsystem": "vhost_blk", "config": [] },
    {
      "subsystem": "ublk",
      "config": [
        { "method": "ublk_create_target", "params": { "cpumask": "1" } },
        { "method": "ublk_start_disk", "params": { "bdev_name": "malloc0", "ublk_id": 0, "num_queues": 1, "queue_depth": 128 } }
      ]
    },
    { "subsystem": "nbd", "config": [] },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_set_config",
          "params": { "discovery_filter": "match_any", "admin_cmd_passthru": { "identify_ctrlr": false } } },
        { "method": "nvmf_set_max_subsystems", "params": { "max_subsystems": 1024 } },
        { "method": "nvmf_set_crdt", "params": { "crdt1": 0, "crdt2": 0, "crdt3": 0 } }
      ]
    },
    {
      "subsystem": "iscsi",
      "config": [
        { "method": "iscsi_set_options",
          "params": { "node_base": "iqn.2016-06.io.spdk", "max_sessions": 128, "max_connections_per_session": 2,
                      "max_queue_depth": 64, "default_time2wait": 2, "default_time2retain": 20, "first_burst_length": 8192,
                      "immediate_data": true, "allow_duplicated_isid": false, "error_recovery_level": 0, "nop_timeout": 60,
                      "nop_in_interval": 30, "disable_chap": false, "require_chap": false, "mutual_chap": false,
                      "chap_group": 0, "max_large_datain_per_connection": 64, "max_r2t_per_connection": 4,
                      "pdu_pool_size": 36864, "immediate_data_pool_size": 16384, "data_out_pool_size": 2048 } }
      ]
    }
  ]
}'
00:19:29.892  14:00:54 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 78033
00:19:29.892  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78033 ']'
00:19:29.892  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78033
00:19:29.892  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname
00:19:29.892  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:29.892  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78033
00:19:29.892  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:29.892  killing process with pid 78033
00:19:29.892  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:29.892  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78033'
00:19:29.892  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78033
00:19:29.892  14:00:54 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78033
00:19:31.268  [2024-07-15 14:00:55.612624] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:19:31.268  [2024-07-15 14:00:55.644441] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:31.268  [2024-07-15 14:00:55.644667] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:19:31.268  [2024-07-15 14:00:55.655348] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:31.268  [2024-07-15 14:00:55.655431] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:19:31.268  [2024-07-15 14:00:55.655446] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:19:31.268  [2024-07-15 14:00:55.655484] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown
00:19:31.268  [2024-07-15 14:00:55.655691] ublk.c: 750:_ublk_fini_done: *DEBUG*:
00:19:32.640  14:00:56 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=78088
00:19:32.640  14:00:56 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 78088
00:19:32.640  14:00:56 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78088 ']'
00:19:32.640  14:00:56 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:32.640  14:00:56 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:32.640  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:32.640  14:00:56 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:32.640  14:00:56 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:32.640  14:00:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:32.640  14:00:56 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63
00:19:32.640  14:00:56 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{
  "subsystems": [
    { "subsystem": "keyring", "config": [] },
    {
      "subsystem": "iobuf",
      "config": [
        { "method": "iobuf_set_options",
          "params": { "small_pool_count": 8192, "large_pool_count": 1024, "small_bufsize": 8192, "large_bufsize": 135168 } }
      ]
    },
    {
      "subsystem": "sock",
      "config": [
        { "method": "sock_set_default_impl", "params": { "impl_name": "posix" } },
        { "method": "sock_impl_set_options",
          "params": { "impl_name": "ssl", "recv_buf_size": 4096, "send_buf_size": 4096, "enable_recv_pipe": true,
                      "enable_quickack": false, "enable_placement_id": 0, "enable_zerocopy_send_server": true,
                      "enable_zerocopy_send_client": false, "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false } },
        { "method": "sock_impl_set_options",
          "params": { "impl_name": "posix", "recv_buf_size": 2097152, "send_buf_size": 2097152, "enable_recv_pipe": true,
                      "enable_quickack": false, "enable_placement_id": 0, "enable_zerocopy_send_server": true,
                      "enable_zerocopy_send_client": false, "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false } }
      ]
    },
    { "subsystem": "vmd", "config": [] },
    {
      "subsystem": "accel",
      "config": [
        { "method": "accel_set_options",
          "params": { "small_cache_size": 128, "large_cache_size": 16, "task_count": 2048, "sequence_count": 2048, "buf_count": 2048 } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_set_options",
          "params": { "bdev_io_pool_size": 65535, "bdev_io_cache_size": 256, "bdev_auto_examine": true,
                      "iobuf_small_cache_size": 128, "iobuf_large_cache_size": 16 } },
        { "method": "bdev_raid_set_options", "params": { "process_window_size_kb": 1024 } },
        { "method": "bdev_iscsi_set_options", "params": { "timeout_sec": 30 } },
        { "method": "bdev_nvme_set_options",
          "params": { "action_on_timeout": "none", "timeout_us": 0, "timeout_admin_us": 0, "keep_alive_timeout_ms": 10000,
                      "arbitration_burst": 0, "low_priority_weight": 0, "medium_priority_weight": 0, "high_priority_weight": 0,
                      "nvme_adminq_poll_period_us": 10000, "nvme_ioq_poll_period_us": 0, "io_queue_requests": 0,
                      "delay_cmd_submit": true, "transport_retry_count": 4, "bdev_retry_count": 3, "transport_ack_timeout": 0,
                      "ctrlr_loss_timeout_sec": 0, "reconnect_delay_sec": 0, "fast_io_fail_timeout_sec": 0,
                      "disable_auto_failback": false, "generate_uuids": false, "transport_tos": 0, "nvme_error_stat": false,
                      "rdma_srq_size": 0, "io_path_stat": false, "allow_accel_sequence": false, "rdma_max_cq_size": 0,
                      "rdma_cm_event_timeout_ms": 0,
                      "dhchap_digests": [ "sha256", "sha384", "sha512" ],
                      "dhchap_dhgroups": [ "null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192" ] } },
        { "method": "bdev_nvme_set_hotplug", "params": { "period_us": 100000, "enable": false } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096, "physical_block_size": 4096,
                      "uuid": "70c20789-4191-4753-89da-8e8a46f0dbc8", "optimal_io_boundary": 0 } },
        { "method": "bdev_wait_for_examine" }
      ]
    },
    { "subsystem": "scsi", "config": null },
    {
      "subsystem": "scheduler",
      "config": [
        { "method": "framework_set_scheduler", "params": { "name": "static" } }
      ]
    },
    { "subsystem": "vhost_scsi", "config": [] },
    { "subsystem": "vhost_blk", "config": [] },
    {
      "subsystem": "ublk",
      "config": [
        { "method": "ublk_create_target", "params": { "cpumask": "1" } },
        { "method": "ublk_start_disk", "params": { "bdev_name": "malloc0", "ublk_id": 0, "num_queues": 1, "queue_depth": 128 } }
      ]
    },
    { "subsystem": "nbd", "config": [] },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_set_config",
          "params": { "discovery_filter": "match_any", "admin_cmd_passthru": { "identify_ctrlr": false } } },
        { "method": "nvmf_set_max_subsystems", "params": { "max_subsystems": 1024 } },
        { "method": "nvmf_set_crdt", "params": { "crdt1": 0, "crdt2": 0, "crdt3": 0 } }
      ]
    },
    {
      "subsystem": "iscsi",
      "config": [
        { "method": "iscsi_set_options",
          "params": { "node_base": "iqn.2016-06.io.spdk", "max_sessions": 128, "max_connections_per_session": 2,
                      "max_queue_depth": 64, "default_time2wait": 2, "default_time2retain": 20, "first_burst_length": 8192,
                      "immediate_data": true, "allow_duplicated_isid": false, "error_recovery_level": 0, "nop_timeout": 60,
                      "nop_in_interval": 30, "disable_chap": false, "require_chap": false, "mutual_chap": false,
                      "chap_group": 0, "max_large_datain_per_connection": 64, "max_r2t_per_connection": 4,
                      "pdu_pool_size": 36864, "immediate_data_pool_size": 16384, "data_out_pool_size": 2048 } }
      ]
    }
  ]
}'
00:19:32.641  [2024-07-15 14:00:57.011966] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:19:32.641  [2024-07-15 14:00:57.012122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78088 ]
00:19:32.899  [2024-07-15 14:00:57.172270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:33.831  [2024-07-15 14:00:57.435551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:33.831  [2024-07-15 14:00:58.364332] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:33.831  [2024-07-15 14:00:58.365429] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:33.831  [2024-07-15 14:00:58.372467] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:19:33.831  [2024-07-15 14:00:58.372572] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:19:33.831  [2024-07-15 14:00:58.372590] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:33.831  [2024-07-15 14:00:58.372600] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:34.089  [2024-07-15 14:00:58.381427] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:34.089  [2024-07-15 14:00:58.381474] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:34.089  [2024-07-15 14:00:58.388421] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:34.089  [2024-07-15 14:00:58.388621] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:34.089  [2024-07-15 14:00:58.405364] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device'
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]]
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]]
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 78088
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78088 ']'
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78088
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78088
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:34.089  killing process with pid 78088
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78088'
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78088
00:19:34.089  14:00:58 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78088
00:19:35.994  [2024-07-15 14:01:00.194518] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:19:35.994  [2024-07-15 14:01:00.222359] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:35.994  [2024-07-15 14:01:00.222553] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:19:35.994  [2024-07-15 14:01:00.230342] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:35.994  [2024-07-15 14:01:00.230405] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:19:35.994  [2024-07-15 14:01:00.230418] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:19:35.994  [2024-07-15 14:01:00.230451] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown
00:19:35.994  [2024-07-15 14:01:00.230662] ublk.c: 750:_ublk_fini_done: *DEBUG*:
00:19:36.927  14:01:01 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT
00:19:36.927
00:19:36.927  real	0m8.787s
00:19:36.927  user	0m7.458s
00:19:36.927  sys	0m2.198s
00:19:36.927  14:01:01 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:36.927  14:01:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:36.927  ************************************
00:19:36.927  END TEST test_save_ublk_config
00:19:36.927  ************************************
00:19:37.185  14:01:01 ublk -- common/autotest_common.sh@1142 -- # return 0
00:19:37.185  14:01:01 ublk -- ublk/ublk.sh@139 -- # spdk_pid=78166
00:19:37.185  14:01:01 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:19:37.185  14:01:01 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:37.185  14:01:01 ublk -- ublk/ublk.sh@141 -- # waitforlisten 78166
00:19:37.185  14:01:01 ublk -- common/autotest_common.sh@829 -- # '[' -z 78166 ']'
00:19:37.185  14:01:01 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:37.185  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:37.185  14:01:01 ublk -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:37.185  14:01:01 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:37.185  14:01:01 ublk -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:37.185  14:01:01 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:37.185  [2024-07-15 14:01:01.625747] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:19:37.185  [2024-07-15 14:01:01.625929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78166 ]
00:19:37.444  [2024-07-15 14:01:01.810873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:37.702  [2024-07-15 14:01:02.044502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:37.702  [2024-07-15 14:01:02.044502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:38.267  14:01:02 ublk -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:38.267  14:01:02 ublk -- common/autotest_common.sh@862 -- # return 0
00:19:38.267  14:01:02 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk
00:19:38.267  14:01:02 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:19:38.267  14:01:02 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:38.267  14:01:02 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:38.267  ************************************
00:19:38.267  START TEST test_create_ublk
00:19:38.267  ************************************
00:19:38.267  14:01:02 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk
00:19:38.267  14:01:02 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target
00:19:38.267  14:01:02 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:38.267  14:01:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:38.267  [2024-07-15 14:01:02.770329] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:38.267  [2024-07-15 14:01:02.772743] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:38.268  14:01:02 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:38.268  14:01:02 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target=
00:19:38.268  14:01:02 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096
00:19:38.268  14:01:02 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:38.268  14:01:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:38.526  14:01:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:38.526  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0
00:19:38.526  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:19:38.526  14:01:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:38.526  14:01:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:38.526  [2024-07-15 14:01:03.018506] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:19:38.526  [2024-07-15 14:01:03.018980] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:19:38.526  [2024-07-15 14:01:03.019001] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:38.526  [2024-07-15 14:01:03.019013] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:38.526  [2024-07-15 14:01:03.026632] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:38.526  [2024-07-15 14:01:03.026796] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:38.526  [2024-07-15 14:01:03.034352] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:38.526  [2024-07-15 14:01:03.045569] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:38.784  [2024-07-15 14:01:03.074339] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:38.784  14:01:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0
00:19:38.784  14:01:03 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:38.784  14:01:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:38.784  14:01:03 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[
  {
    "ublk_device": "/dev/ublkb0",
    "id": 0,
    "queue_depth": 512,
    "num_queues": 4,
    "bdev_name": "Malloc0"
  }
]'
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device'
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id'
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]]
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth'
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]]
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues'
00:19:38.784  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]]
00:19:39.041  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name'
00:19:39.041  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:19:39.041  14:01:03 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'
00:19:39.041  14:01:03 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0
00:19:39.041  14:01:03 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0
00:19:39.041  14:01:03 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728
00:19:39.041  14:01:03 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write
00:19:39.041  14:01:03 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc
00:19:39.041  14:01:03 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10'
00:19:39.041  14:01:03 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template=
00:19:39.041  14:01:03 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]]
00:19:39.041  14:01:03 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:19:39.042  14:01:03 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:19:39.042  14:01:03 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:19:39.042  fio: verification read phase will never start because write phase uses all of runtime
00:19:39.042  fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:19:39.042  fio-3.35
00:19:39.042  Starting 1 process
00:19:51.270
00:19:51.270  fio_test: (groupid=0, jobs=1): err= 0: pid=78216: Mon Jul 15 14:01:13 2024
00:19:51.270    write: IOPS=11.0k, BW=43.0MiB/s (45.1MB/s)(430MiB/10001msec); 0 zone resets
00:19:51.270      clat (usec): min=57, max=5577, avg=89.15, stdev=134.91
00:19:51.270       lat (usec): min=57, max=5594, avg=90.08, stdev=134.94
00:19:51.270      clat percentiles (usec):
00:19:51.270       |  1.00th=[   65],  5.00th=[   75], 10.00th=[   76], 20.00th=[   77],
00:19:51.270       | 30.00th=[   78], 40.00th=[   79], 50.00th=[   80], 60.00th=[   81],
00:19:51.270       | 70.00th=[   84], 80.00th=[   88], 90.00th=[   92], 95.00th=[   97],
00:19:51.270       | 99.00th=[  118], 99.50th=[  137], 99.90th=[ 2769], 99.95th=[ 3261],
00:19:51.270       | 99.99th=[ 3752]
00:19:51.270     bw (  KiB/s): min=41696, max=45432, per=100.00%, avg=44027.37, stdev=1111.72, samples=19
00:19:51.270     iops        : min=10424, max=11358, avg=11006.84, stdev=277.93, samples=19
00:19:51.270    lat (usec)   : 100=96.50%, 250=3.12%, 500=0.02%, 750=0.02%, 1000=0.03%
00:19:51.270    lat (msec)   : 2=0.12%, 4=0.19%, 10=0.01%
00:19:51.270    cpu          : usr=2.98%, sys=8.02%, ctx=110011, majf=0, minf=797
00:19:51.270    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:51.270       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:51.270       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:51.270       issued rwts: total=0,109998,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:51.270       latency   : target=0, window=0, percentile=100.00%, depth=1
00:19:51.270
00:19:51.270  Run status group 0 (all jobs):
00:19:51.270    WRITE: bw=43.0MiB/s (45.1MB/s), 43.0MiB/s-43.0MiB/s (45.1MB/s-45.1MB/s), io=430MiB (451MB), run=10001-10001msec
00:19:51.270
00:19:51.270  Disk stats (read/write):
00:19:51.270    ublkb0: ios=0/108849, merge=0/0, ticks=0/8848, in_queue=8849, util=99.08%
00:19:51.270  14:01:13 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:19:51.270  14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@559 -- #
xtrace_disable 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 [2024-07-15 14:01:13.613361] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:51.270 [2024-07-15 14:01:13.656385] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:51.270 [2024-07-15 14:01:13.661456] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:51.270 [2024-07-15 14:01:13.664876] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:51.270 [2024-07-15 14:01:13.665340] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:51.270 [2024-07-15 14:01:13.668395] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.270 14:01:13 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 [2024-07-15 14:01:13.675450] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:19:51.270 request: 00:19:51.270 { 00:19:51.270 "ublk_id": 0, 00:19:51.270 "method": "ublk_stop_disk", 00:19:51.270 "req_id": 1 00:19:51.270 } 00:19:51.270 Got JSON-RPC error response 00:19:51.270 response: 00:19:51.270 { 00:19:51.270 "code": -19, 00:19:51.270 "message": "No such device" 00:19:51.270 } 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:51.270 14:01:13 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 [2024-07-15 14:01:13.691427] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:19:51.270 [2024-07-15 14:01:13.699327] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:19:51.270 [2024-07-15 14:01:13.699374] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.270 14:01:13 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.270 14:01:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 14:01:14 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.270 14:01:14 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:19:51.270 14:01:14 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:51.270 14:01:14 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.270 14:01:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 14:01:14 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.270 14:01:14 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:51.270 14:01:14 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:19:51.270 14:01:14 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:51.270 14:01:14 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:51.270 14:01:14 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.270 14:01:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 14:01:14 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.270 14:01:14 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:51.270 14:01:14 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:19:51.270 ************************************ 00:19:51.270 END TEST test_create_ublk 00:19:51.270 ************************************ 00:19:51.270 14:01:14 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:51.270 00:19:51.270 real 0m11.383s 00:19:51.270 user 0m0.759s 00:19:51.270 sys 0m0.889s 00:19:51.270 14:01:14 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:51.270 14:01:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 14:01:14 ublk -- common/autotest_common.sh@1142 -- # return 0 00:19:51.270 14:01:14 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:19:51.270 14:01:14 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:51.270 14:01:14 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.270 14:01:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 ************************************ 00:19:51.270 START TEST test_create_multi_ublk 00:19:51.270 ************************************ 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 [2024-07-15 14:01:14.196336] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:51.270 [2024-07-15 14:01:14.198769] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 
3 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.270 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 [2024-07-15 14:01:14.436500] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:51.270 [2024-07-15 14:01:14.437051] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:51.271 [2024-07-15 14:01:14.437080] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:51.271 [2024-07-15 14:01:14.437092] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:51.271 [2024-07-15 14:01:14.445532] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:51.271 [2024-07-15 14:01:14.445561] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:51.271 [2024-07-15 14:01:14.452350] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:51.271 [2024-07-15 14:01:14.453132] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:51.271 [2024-07-15 14:01:14.463416] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.271 [2024-07-15 14:01:14.726566] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:19:51.271 [2024-07-15 14:01:14.727072] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:19:51.271 [2024-07-15 14:01:14.727096] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:51.271 
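Device 0 above has completed the full bring-up, and the same pattern now repeats for Malloc1 through Malloc3: create a malloc bdev, then export it, which drives the UBLK_CMD_ADD_DEV, UBLK_CMD_SET_PARAMS and UBLK_CMD_START_DEV control-command sequence visible in the trace. A minimal by-hand equivalent using SPDK's rpc.py, a sketch with the sizes and queue settings copied from this run:

    # assumes ublk_drv is loaded and a spdk_tgt built with ublk support is running
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096   # 128 MiB bdev, 4096-byte blocks
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512    # exports /dev/ublkb0
    scripts/rpc.py ublk_get_disks -n 0                      # confirm the exported device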
[2024-07-15 14:01:14.727110] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:51.271 [2024-07-15 14:01:14.735603] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:51.271 [2024-07-15 14:01:14.735654] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:51.271 [2024-07-15 14:01:14.742356] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:51.271 [2024-07-15 14:01:14.743096] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:51.271 [2024-07-15 14:01:14.751382] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.271 14:01:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.271 [2024-07-15 14:01:15.006516] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:19:51.271 [2024-07-15 14:01:15.006985] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:19:51.271 [2024-07-15 14:01:15.007007] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:19:51.271 [2024-07-15 14:01:15.007017] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:19:51.271 [2024-07-15 14:01:15.014412] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:51.271 [2024-07-15 14:01:15.014569] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:51.271 [2024-07-15 14:01:15.025396] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:51.271 [2024-07-15 14:01:15.026217] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:19:51.271 [2024-07-15 14:01:15.035387] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.271 [2024-07-15 14:01:15.298481] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:19:51.271 [2024-07-15 14:01:15.298987] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:19:51.271 [2024-07-15 14:01:15.299014] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:19:51.271 [2024-07-15 14:01:15.299027] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:19:51.271 [2024-07-15 14:01:15.306367] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:51.271 [2024-07-15 14:01:15.306403] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:51.271 [2024-07-15 14:01:15.314365] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:51.271 [2024-07-15 14:01:15.315159] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:19:51.271 [2024-07-15 14:01:15.319660] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:19:51.271 { 00:19:51.271 "ublk_device": "/dev/ublkb0", 00:19:51.271 "id": 0, 00:19:51.271 "queue_depth": 512, 00:19:51.271 "num_queues": 4, 00:19:51.271 "bdev_name": "Malloc0" 00:19:51.271 }, 00:19:51.271 { 00:19:51.271 "ublk_device": "/dev/ublkb1", 00:19:51.271 "id": 1, 00:19:51.271 "queue_depth": 512, 00:19:51.271 "num_queues": 4, 00:19:51.271 "bdev_name": "Malloc1" 00:19:51.271 }, 00:19:51.271 { 00:19:51.271 "ublk_device": "/dev/ublkb2", 00:19:51.271 "id": 2, 00:19:51.271 "queue_depth": 512, 00:19:51.271 "num_queues": 4, 00:19:51.271 "bdev_name": "Malloc2" 00:19:51.271 }, 00:19:51.271 { 00:19:51.271 "ublk_device": "/dev/ublkb3", 00:19:51.271 "id": 3, 00:19:51.271 "queue_depth": 512, 00:19:51.271 "num_queues": 4, 00:19:51.271 "bdev_name": "Malloc3" 00:19:51.271 } 00:19:51.271 ]' 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:51.271 14:01:15 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:51.271 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:19:51.529 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:51.529 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:19:51.529 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:19:51.529 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:51.529 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:19:51.529 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:19:51.529 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:19:51.529 14:01:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:19:51.529 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:19:51.529 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:51.529 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:19:51.787 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:51.787 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:19:51.787 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:19:51.787 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:51.787 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:19:51.787 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:19:51.787 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:19:51.787 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:19:51.787 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:19:51.787 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- 
# [[ 512 = \5\1\2 ]] 00:19:51.787 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:52.045 [2024-07-15 14:01:16.416642] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:52.045 [2024-07-15 14:01:16.456721] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:52.045 [2024-07-15 14:01:16.460637] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:52.045 [2024-07-15 14:01:16.466332] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:52.045 [2024-07-15 14:01:16.466723] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:52.045 [2024-07-15 14:01:16.466747] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:52.045 [2024-07-15 14:01:16.472775] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:52.045 [2024-07-15 14:01:16.501780] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:52.045 [2024-07-15 14:01:16.507320] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:52.045 [2024-07-15 14:01:16.518484] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:52.045 [2024-07-15 14:01:16.518821] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:52.045 [2024-07-15 14:01:16.518843] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:52.045 [2024-07-15 14:01:16.534446] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:19:52.045 [2024-07-15 
14:01:16.571345] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:52.045 [2024-07-15 14:01:16.572630] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:19:52.045 [2024-07-15 14:01:16.574701] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:52.045 [2024-07-15 14:01:16.575041] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:19:52.045 [2024-07-15 14:01:16.575061] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.045 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:52.045 [2024-07-15 14:01:16.585540] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:19:52.302 [2024-07-15 14:01:16.626839] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:52.302 [2024-07-15 14:01:16.628318] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:19:52.302 [2024-07-15 14:01:16.632459] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:52.302 [2024-07-15 14:01:16.632809] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:19:52.302 [2024-07-15 14:01:16.632836] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:19:52.302 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.302 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:19:52.560 [2024-07-15 14:01:16.921457] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:19:52.560 [2024-07-15 14:01:16.927432] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:19:52.560 [2024-07-15 14:01:16.927484] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:52.560 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:19:52.560 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:52.560 14:01:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:52.560 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.560 14:01:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:52.817 14:01:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.817 14:01:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:52.817 14:01:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:52.817 14:01:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.817 14:01:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:53.076 14:01:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.076 14:01:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:53.076 14:01:17 
ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:53.076 14:01:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.076 14:01:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 14:01:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.333 14:01:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:53.333 14:01:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:19:53.333 14:01:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.334 14:01:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:19:53.899 ************************************ 00:19:53.899 END TEST test_create_multi_ublk 00:19:53.899 ************************************ 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:53.899 00:19:53.899 real 0m4.103s 00:19:53.899 user 0m1.358s 00:19:53.899 sys 0m0.152s 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:53.899 14:01:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:53.899 14:01:18 ublk -- common/autotest_common.sh@1142 -- # return 0 00:19:53.899 14:01:18 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:53.899 14:01:18 ublk -- ublk/ublk.sh@147 -- # cleanup 00:19:53.899 14:01:18 ublk -- ublk/ublk.sh@130 -- # killprocess 78166 00:19:53.899 14:01:18 ublk -- common/autotest_common.sh@948 -- # '[' -z 78166 ']' 00:19:53.899 14:01:18 ublk -- common/autotest_common.sh@952 -- # kill -0 78166 00:19:53.899 14:01:18 ublk -- common/autotest_common.sh@953 -- # uname 00:19:53.899 14:01:18 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.899 14:01:18 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78166 00:19:53.899 14:01:18 ublk -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:19:53.899 14:01:18 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:53.899 14:01:18 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78166' 00:19:53.899 killing process with pid 78166 00:19:53.899 14:01:18 ublk -- common/autotest_common.sh@967 -- # kill 78166 00:19:53.899 14:01:18 ublk -- common/autotest_common.sh@972 -- # wait 78166 00:19:54.831 [2024-07-15 14:01:19.307844] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:19:54.831 [2024-07-15 14:01:19.307913] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:19:56.205 00:19:56.205 real 0m27.906s 00:19:56.205 user 0m42.179s 00:19:56.205 sys 0m8.028s 00:19:56.205 14:01:20 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.205 14:01:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:56.205 ************************************ 00:19:56.205 END TEST ublk 00:19:56.205 ************************************ 00:19:56.205 14:01:20 -- common/autotest_common.sh@1142 -- # return 0 00:19:56.205 14:01:20 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:56.205 14:01:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:56.205 14:01:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:56.205 14:01:20 -- common/autotest_common.sh@10 -- # set +x 00:19:56.205 ************************************ 00:19:56.205 START TEST ublk_recovery 00:19:56.205 ************************************ 00:19:56.205 14:01:20 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:56.205 * Looking for test storage... 00:19:56.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:56.205 14:01:20 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:56.205 14:01:20 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:56.205 14:01:20 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:56.205 14:01:20 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:56.205 14:01:20 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:56.205 14:01:20 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:56.205 14:01:20 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:56.206 14:01:20 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:56.206 14:01:20 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:19:56.206 14:01:20 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:19:56.206 14:01:20 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=78550 00:19:56.206 14:01:20 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:56.206 14:01:20 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.206 14:01:20 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 78550 00:19:56.206 14:01:20 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78550 ']' 00:19:56.206 14:01:20 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.206 14:01:20 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
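The recovery suite that follows exercises a target restart with I/O in flight: it exports a single device as /dev/ublkb1, runs a timed fio job against it, SIGKILLs spdk_tgt mid-run, restarts it, and reattaches the device with ublk_recover_disk. For reference, the soak workload exactly as invoked later in this log (the taskset core pinning is a harness choice, not a requirement of the scenario):

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 \
        --numjobs=1 --iodepth=128 --ioengine=libaio \
        --rw=randrw --direct=1 --time_based --runtime=60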
00:19:56.206 14:01:20 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.206 14:01:20 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.206 14:01:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.464 [2024-07-15 14:01:20.771843] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:19:56.464 [2024-07-15 14:01:20.772012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78550 ] 00:19:56.464 [2024-07-15 14:01:20.934586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:56.721 [2024-07-15 14:01:21.123504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.721 [2024-07-15 14:01:21.123505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.651 14:01:21 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.651 14:01:21 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:19:57.651 14:01:21 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:19:57.651 14:01:21 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.651 14:01:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.651 [2024-07-15 14:01:21.858346] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:57.651 [2024-07-15 14:01:21.860866] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:57.651 14:01:21 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.651 14:01:21 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:57.651 14:01:21 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.651 14:01:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.651 malloc0 00:19:57.651 14:01:21 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.651 14:01:21 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:19:57.651 14:01:21 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.651 14:01:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.651 [2024-07-15 14:01:21.995113] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:19:57.651 [2024-07-15 14:01:21.995276] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:19:57.651 [2024-07-15 14:01:21.995295] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:57.651 [2024-07-15 14:01:21.995325] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:57.651 [2024-07-15 14:01:22.002562] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:57.651 [2024-07-15 14:01:22.002632] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:57.651 [2024-07-15 14:01:22.010409] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:57.651 [2024-07-15 14:01:22.010663] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:57.651 [2024-07-15 14:01:22.026364] ublk.c: 
328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:57.651 1 00:19:57.651 14:01:22 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.651 14:01:22 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:19:58.644 14:01:23 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=78591 00:19:58.644 14:01:23 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:19:58.644 14:01:23 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:19:58.644 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.644 fio-3.35 00:19:58.644 Starting 1 process 00:20:03.906 14:01:28 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 78550 00:20:03.906 14:01:28 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:20:09.177 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 78550 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:20:09.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.177 14:01:33 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=78691 00:20:09.177 14:01:33 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:09.177 14:01:33 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.177 14:01:33 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 78691 00:20:09.177 14:01:33 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78691 ']' 00:20:09.177 14:01:33 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.177 14:01:33 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.177 14:01:33 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.177 14:01:33 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.177 14:01:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.177 [2024-07-15 14:01:33.144727] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
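With the original target (pid 78550) killed while fio was still issuing I/O, the freshly started instance reattaches the orphaned device through the user-recovery control commands seen below (UBLK_CMD_GET_DEV_INFO, UBLK_CMD_START_USER_RECOVERY, per-queue recovery, UBLK_CMD_END_USER_RECOVERY). A sketch of the same step plus a quick sanity check; the dd read is an illustrative addition, not something this test performs:

    scripts/rpc.py ublk_recover_disk malloc0 1    # reattach /dev/ublkb1 to the new target
    scripts/rpc.py ublk_get_disks -n 1            # the device should be listed again
    dd if=/dev/ublkb1 of=/dev/null bs=4096 count=1 iflag=direct   # optional sanity read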
00:20:09.177 [2024-07-15 14:01:33.144905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78691 ] 00:20:09.177 [2024-07-15 14:01:33.328935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:09.177 [2024-07-15 14:01:33.555019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.177 [2024-07-15 14:01:33.555027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.742 14:01:34 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.742 14:01:34 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:20:09.742 14:01:34 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:20:09.742 14:01:34 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.743 14:01:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 [2024-07-15 14:01:34.283332] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:09.743 [2024-07-15 14:01:34.285863] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:09.743 14:01:34 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.743 14:01:34 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:09.743 14:01:34 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.743 14:01:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.999 malloc0 00:20:09.999 14:01:34 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.999 14:01:34 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:20:09.999 14:01:34 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.999 14:01:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.999 [2024-07-15 14:01:34.419491] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:20:09.999 [2024-07-15 14:01:34.419553] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:09.999 [2024-07-15 14:01:34.419567] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:09.999 [2024-07-15 14:01:34.427382] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:09.999 [2024-07-15 14:01:34.427410] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:20:09.999 [2024-07-15 14:01:34.427511] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:20:09.999 1 00:20:09.999 14:01:34 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.000 14:01:34 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 78591 00:20:10.000 [2024-07-15 14:01:34.435335] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:20:10.000 [2024-07-15 14:01:34.443001] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:20:10.000 [2024-07-15 14:01:34.450357] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:20:10.000 [2024-07-15 14:01:34.450400] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:21:06.210 00:21:06.210 
fio_test: (groupid=0, jobs=1): err= 0: pid=78594: Mon Jul 15 14:02:23 2024 00:21:06.210 read: IOPS=17.5k, BW=68.4MiB/s (71.7MB/s)(4104MiB/60003msec) 00:21:06.210 slat (nsec): min=1904, max=890204, avg=6968.04, stdev=2863.35 00:21:06.210 clat (usec): min=1100, max=6422.0k, avg=3641.54, stdev=53125.69 00:21:06.210 lat (usec): min=1108, max=6422.0k, avg=3648.50, stdev=53125.69 00:21:06.210 clat percentiles (usec): 00:21:06.210 | 1.00th=[ 2573], 5.00th=[ 2769], 10.00th=[ 2835], 20.00th=[ 2900], 00:21:06.210 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:21:06.210 | 70.00th=[ 3130], 80.00th=[ 3326], 90.00th=[ 3818], 95.00th=[ 4228], 00:21:06.210 | 99.00th=[ 5997], 99.50th=[ 6915], 99.90th=[ 8848], 99.95th=[10814], 00:21:06.210 | 99.99th=[13960] 00:21:06.210 bw ( KiB/s): min=19912, max=84360, per=100.00%, avg=77881.27, stdev=8495.26, samples=107 00:21:06.210 iops : min= 4978, max=21090, avg=19470.30, stdev=2123.81, samples=107 00:21:06.210 write: IOPS=17.5k, BW=68.3MiB/s (71.6MB/s)(4100MiB/60003msec); 0 zone resets 00:21:06.210 slat (usec): min=2, max=872, avg= 7.00, stdev= 2.89 00:21:06.210 clat (usec): min=1043, max=6422.2k, avg=3656.49, stdev=46877.58 00:21:06.210 lat (usec): min=1060, max=6422.2k, avg=3663.49, stdev=46877.58 00:21:06.210 clat percentiles (usec): 00:21:06.210 | 1.00th=[ 2606], 5.00th=[ 2900], 10.00th=[ 2966], 20.00th=[ 3032], 00:21:06.210 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3195], 00:21:06.210 | 70.00th=[ 3261], 80.00th=[ 3458], 90.00th=[ 3916], 95.00th=[ 4293], 00:21:06.210 | 99.00th=[ 5997], 99.50th=[ 7046], 99.90th=[ 8979], 99.95th=[10814], 00:21:06.210 | 99.99th=[14091] 00:21:06.210 bw ( KiB/s): min=20168, max=84032, per=100.00%, avg=77786.68, stdev=8410.63, samples=107 00:21:06.210 iops : min= 5042, max=21008, avg=19446.65, stdev=2102.65, samples=107 00:21:06.210 lat (msec) : 2=0.05%, 4=92.01%, 10=7.88%, 20=0.06%, >=2000=0.01% 00:21:06.210 cpu : usr=9.92%, sys=23.67%, ctx=71086, majf=0, minf=13 00:21:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.210 issued rwts: total=1050660,1049512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.210 00:21:06.210 Run status group 0 (all jobs): 00:21:06.210 READ: bw=68.4MiB/s (71.7MB/s), 68.4MiB/s-68.4MiB/s (71.7MB/s-71.7MB/s), io=4104MiB (4304MB), run=60003-60003msec 00:21:06.210 WRITE: bw=68.3MiB/s (71.6MB/s), 68.3MiB/s-68.3MiB/s (71.6MB/s-71.6MB/s), io=4100MiB (4299MB), run=60003-60003msec 00:21:06.210 00:21:06.210 Disk stats (read/write): 00:21:06.210 ublkb1: ios=1048404/1047106, merge=0/0, ticks=3724135/3608768, in_queue=7332903, util=99.94% 00:21:06.210 14:02:23 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.210 [2024-07-15 14:02:23.295514] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:06.210 [2024-07-15 14:02:23.329486] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:06.210 [2024-07-15 14:02:23.329771] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:06.210 [2024-07-15 14:02:23.337409] 
ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:06.210 [2024-07-15 14:02:23.337549] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:06.210 [2024-07-15 14:02:23.337566] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.210 14:02:23 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.210 [2024-07-15 14:02:23.353459] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:21:06.210 [2024-07-15 14:02:23.359544] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:21:06.210 [2024-07-15 14:02:23.359596] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.210 14:02:23 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:21:06.210 14:02:23 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:21:06.210 14:02:23 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 78691 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 78691 ']' 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 78691 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78691 00:21:06.210 killing process with pid 78691 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78691' 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@967 -- # kill 78691 00:21:06.210 14:02:23 ublk_recovery -- common/autotest_common.sh@972 -- # wait 78691 00:21:06.210 [2024-07-15 14:02:24.351531] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:21:06.210 [2024-07-15 14:02:24.351608] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:21:06.210 ************************************ 00:21:06.210 END TEST ublk_recovery 00:21:06.210 ************************************ 00:21:06.210 00:21:06.210 real 1m5.110s 00:21:06.210 user 1m47.344s 00:21:06.210 sys 0m31.858s 00:21:06.210 14:02:25 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:06.210 14:02:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.210 14:02:25 -- common/autotest_common.sh@1142 -- # return 0 00:21:06.210 14:02:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:06.210 14:02:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:06.210 14:02:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.210 14:02:25 -- common/autotest_common.sh@10 -- # set +x 00:21:06.210 14:02:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:06.210 14:02:25 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:21:06.210 14:02:25 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:21:06.210 14:02:25 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:21:06.210 14:02:25 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:21:06.210 14:02:25 -- 
spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:21:06.210 14:02:25 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:21:06.210 14:02:25 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:21:06.210 14:02:25 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:21:06.210 14:02:25 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:21:06.210 14:02:25 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:06.210 14:02:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:06.210 14:02:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.210 14:02:25 -- common/autotest_common.sh@10 -- # set +x 00:21:06.210 ************************************ 00:21:06.210 START TEST ftl 00:21:06.210 ************************************ 00:21:06.210 14:02:25 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:06.210 * Looking for test storage... 00:21:06.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:06.210 14:02:25 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:06.210 14:02:25 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:06.210 14:02:25 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:06.210 14:02:25 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:06.211 14:02:25 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:06.211 14:02:25 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:06.211 14:02:25 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.211 14:02:25 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:06.211 14:02:25 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:06.211 14:02:25 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:06.211 14:02:25 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:06.211 14:02:25 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:06.211 14:02:25 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:06.211 14:02:25 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:06.211 14:02:25 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:06.211 14:02:25 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:06.211 14:02:25 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:06.211 14:02:25 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:06.211 14:02:25 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:06.211 14:02:25 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:06.211 14:02:25 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:06.211 14:02:25 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:06.211 14:02:25 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:06.211 14:02:25 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:06.211 14:02:25 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:06.211 14:02:25 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:06.211 14:02:25 ftl -- ftl/common.sh@23 -- # 
spdk_ini_pid= 00:21:06.211 14:02:25 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:06.211 14:02:25 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:06.211 14:02:25 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.211 14:02:25 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:21:06.211 14:02:25 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:21:06.211 14:02:25 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:21:06.211 14:02:25 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:21:06.211 14:02:25 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:06.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:06.211 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:06.211 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:06.211 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:06.211 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:06.211 14:02:26 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=79468 00:21:06.211 14:02:26 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:06.211 14:02:26 ftl -- ftl/ftl.sh@38 -- # waitforlisten 79468 00:21:06.211 14:02:26 ftl -- common/autotest_common.sh@829 -- # '[' -z 79468 ']' 00:21:06.211 14:02:26 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.211 14:02:26 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.211 14:02:26 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.211 14:02:26 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.211 14:02:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:06.211 [2024-07-15 14:02:26.404837] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
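[editor's note] On the startup sequence just traced: spdk_tgt is launched with --wait-for-rpc, so the target parks before subsystem initialization until framework_start_init arrives over the RPC socket, and waitforlisten simply polls that socket until it answers. A minimal standalone reproduction (a sketch, assuming the default socket path /var/tmp/spdk.sock and the standard rpc_get_methods RPC; both pre-init RPCs shown are the ones this run issues next):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    spdk_pid=$!
    # Poll until the RPC server answers, which is what waitforlisten does internally.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # Pre-init options can now be set before the framework starts:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init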
00:21:06.211 [2024-07-15 14:02:26.405252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79468 ] 00:21:06.211 [2024-07-15 14:02:26.578211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.211 [2024-07-15 14:02:26.766720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.211 14:02:27 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.211 14:02:27 ftl -- common/autotest_common.sh@862 -- # return 0 00:21:06.211 14:02:27 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:21:06.211 14:02:27 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:06.211 14:02:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:21:06.211 14:02:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@50 -- # break 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@63 -- # break 00:21:06.211 14:02:29 ftl -- ftl/ftl.sh@66 -- # killprocess 79468 00:21:06.211 14:02:29 ftl -- common/autotest_common.sh@948 -- # '[' -z 79468 ']' 00:21:06.211 14:02:29 ftl -- common/autotest_common.sh@952 -- # kill -0 79468 00:21:06.211 14:02:29 ftl -- common/autotest_common.sh@953 -- # uname 00:21:06.211 14:02:29 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:06.211 14:02:29 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79468 00:21:06.211 killing process with pid 79468 00:21:06.211 14:02:29 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:06.211 14:02:29 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:06.211 14:02:29 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79468' 00:21:06.211 14:02:29 ftl -- common/autotest_common.sh@967 -- # kill 79468 00:21:06.211 14:02:29 ftl -- common/autotest_common.sh@972 -- # wait 79468 00:21:07.584 14:02:31 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:21:07.584 14:02:31 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:07.584 14:02:31 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:07.584 14:02:31 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:07.584 14:02:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:07.584 ************************************ 00:21:07.584 START TEST ftl_fio_basic 00:21:07.584 ************************************ 00:21:07.584 14:02:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:07.584 * Looking for test storage... 00:21:07.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:07.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=79611 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 79611 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 79611 ']' 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.584 14:02:32 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.585 14:02:32 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.585 14:02:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:07.585 14:02:32 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:21:07.843 [2024-07-15 14:02:32.210033] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
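[editor's note] Context for the exports traced above: the 'basic' suite expands to three fio workloads (randw-verify, randw-verify-j2, randw-verify-depth128), and FTL_BDEV_NAME/FTL_JSON_CONF are the handoff to fio's SPDK bdev engine, which opens the named bdev from the saved JSON config rather than a block device node. A sketch of the eventual invocation (the plugin path and job-file location are assumptions, not taken from this log):

    # fio driven through SPDK's bdev plugin against the FTL bdev:
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev fio \
        --ioengine=spdk_bdev \
        --spdk_json_conf="$FTL_JSON_CONF" \
        --filename="$FTL_BDEV_NAME" \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio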
00:21:07.843 [2024-07-15 14:02:32.210346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79611 ] 00:21:08.101 [2024-07-15 14:02:32.399504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:08.101 [2024-07-15 14:02:32.632611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.101 [2024-07-15 14:02:32.632749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.101 [2024-07-15 14:02:32.632757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.032 14:02:33 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.032 14:02:33 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:21:09.032 14:02:33 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:09.032 14:02:33 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:21:09.032 14:02:33 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:09.032 14:02:33 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:21:09.032 14:02:33 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:21:09.032 14:02:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:09.289 14:02:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:09.289 14:02:33 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:21:09.289 14:02:33 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:09.289 14:02:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:09.289 14:02:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:09.289 14:02:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:21:09.289 14:02:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:21:09.289 14:02:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:09.547 14:02:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:09.547 { 00:21:09.547 "name": "nvme0n1", 00:21:09.547 "aliases": [ 00:21:09.547 "2c66f6c1-9093-4299-b453-1dfdb9d1c333" 00:21:09.547 ], 00:21:09.547 "product_name": "NVMe disk", 00:21:09.547 "block_size": 4096, 00:21:09.547 "num_blocks": 1310720, 00:21:09.547 "uuid": "2c66f6c1-9093-4299-b453-1dfdb9d1c333", 00:21:09.547 "assigned_rate_limits": { 00:21:09.547 "rw_ios_per_sec": 0, 00:21:09.547 "rw_mbytes_per_sec": 0, 00:21:09.547 "r_mbytes_per_sec": 0, 00:21:09.547 "w_mbytes_per_sec": 0 00:21:09.547 }, 00:21:09.547 "claimed": false, 00:21:09.547 "zoned": false, 00:21:09.547 "supported_io_types": { 00:21:09.547 "read": true, 00:21:09.547 "write": true, 00:21:09.547 "unmap": true, 00:21:09.547 "flush": true, 00:21:09.547 "reset": true, 00:21:09.547 "nvme_admin": true, 00:21:09.547 "nvme_io": true, 00:21:09.547 "nvme_io_md": false, 00:21:09.547 "write_zeroes": true, 00:21:09.547 "zcopy": false, 00:21:09.547 "get_zone_info": false, 00:21:09.547 "zone_management": false, 00:21:09.547 "zone_append": false, 00:21:09.547 "compare": true, 00:21:09.547 "compare_and_write": false, 00:21:09.547 "abort": true, 00:21:09.547 "seek_hole": false, 00:21:09.547 
"seek_data": false, 00:21:09.547 "copy": true, 00:21:09.547 "nvme_iov_md": false 00:21:09.547 }, 00:21:09.547 "driver_specific": { 00:21:09.547 "nvme": [ 00:21:09.547 { 00:21:09.547 "pci_address": "0000:00:11.0", 00:21:09.547 "trid": { 00:21:09.547 "trtype": "PCIe", 00:21:09.547 "traddr": "0000:00:11.0" 00:21:09.547 }, 00:21:09.547 "ctrlr_data": { 00:21:09.547 "cntlid": 0, 00:21:09.547 "vendor_id": "0x1b36", 00:21:09.547 "model_number": "QEMU NVMe Ctrl", 00:21:09.547 "serial_number": "12341", 00:21:09.547 "firmware_revision": "8.0.0", 00:21:09.547 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:09.547 "oacs": { 00:21:09.547 "security": 0, 00:21:09.547 "format": 1, 00:21:09.547 "firmware": 0, 00:21:09.547 "ns_manage": 1 00:21:09.547 }, 00:21:09.547 "multi_ctrlr": false, 00:21:09.547 "ana_reporting": false 00:21:09.547 }, 00:21:09.547 "vs": { 00:21:09.547 "nvme_version": "1.4" 00:21:09.547 }, 00:21:09.547 "ns_data": { 00:21:09.547 "id": 1, 00:21:09.547 "can_share": false 00:21:09.547 } 00:21:09.547 } 00:21:09.547 ], 00:21:09.547 "mp_policy": "active_passive" 00:21:09.547 } 00:21:09.547 } 00:21:09.547 ]' 00:21:09.547 14:02:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:09.547 14:02:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:21:09.547 14:02:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:09.804 14:02:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:09.804 14:02:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:09.804 14:02:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:21:09.804 14:02:34 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:21:09.804 14:02:34 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:09.804 14:02:34 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:21:09.804 14:02:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:09.804 14:02:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:10.062 14:02:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:21:10.062 14:02:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:10.321 14:02:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=82e1b0ec-0f06-4c43-9c15-c4fe7cf72adb 00:21:10.321 14:02:34 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 82e1b0ec-0f06-4c43-9c15-c4fe7cf72adb 00:21:10.580 14:02:35 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b2edaeb3-6860-4891-bc46-9cef96adf98e 00:21:10.838 14:02:35 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b2edaeb3-6860-4891-bc46-9cef96adf98e 00:21:10.838 14:02:35 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:21:10.838 14:02:35 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:10.838 14:02:35 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b2edaeb3-6860-4891-bc46-9cef96adf98e 00:21:10.838 14:02:35 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:21:10.838 14:02:35 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b2edaeb3-6860-4891-bc46-9cef96adf98e 00:21:10.838 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=b2edaeb3-6860-4891-bc46-9cef96adf98e 00:21:10.838 14:02:35 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:10.838 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:21:10.838 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:21:10.838 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b2edaeb3-6860-4891-bc46-9cef96adf98e 00:21:11.101 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:11.101 { 00:21:11.101 "name": "b2edaeb3-6860-4891-bc46-9cef96adf98e", 00:21:11.101 "aliases": [ 00:21:11.101 "lvs/nvme0n1p0" 00:21:11.101 ], 00:21:11.102 "product_name": "Logical Volume", 00:21:11.102 "block_size": 4096, 00:21:11.102 "num_blocks": 26476544, 00:21:11.102 "uuid": "b2edaeb3-6860-4891-bc46-9cef96adf98e", 00:21:11.102 "assigned_rate_limits": { 00:21:11.102 "rw_ios_per_sec": 0, 00:21:11.102 "rw_mbytes_per_sec": 0, 00:21:11.102 "r_mbytes_per_sec": 0, 00:21:11.102 "w_mbytes_per_sec": 0 00:21:11.102 }, 00:21:11.102 "claimed": false, 00:21:11.102 "zoned": false, 00:21:11.102 "supported_io_types": { 00:21:11.102 "read": true, 00:21:11.102 "write": true, 00:21:11.102 "unmap": true, 00:21:11.102 "flush": false, 00:21:11.102 "reset": true, 00:21:11.102 "nvme_admin": false, 00:21:11.102 "nvme_io": false, 00:21:11.102 "nvme_io_md": false, 00:21:11.102 "write_zeroes": true, 00:21:11.102 "zcopy": false, 00:21:11.102 "get_zone_info": false, 00:21:11.102 "zone_management": false, 00:21:11.102 "zone_append": false, 00:21:11.102 "compare": false, 00:21:11.102 "compare_and_write": false, 00:21:11.102 "abort": false, 00:21:11.102 "seek_hole": true, 00:21:11.102 "seek_data": true, 00:21:11.102 "copy": false, 00:21:11.102 "nvme_iov_md": false 00:21:11.102 }, 00:21:11.102 "driver_specific": { 00:21:11.102 "lvol": { 00:21:11.102 "lvol_store_uuid": "82e1b0ec-0f06-4c43-9c15-c4fe7cf72adb", 00:21:11.102 "base_bdev": "nvme0n1", 00:21:11.102 "thin_provision": true, 00:21:11.102 "num_allocated_clusters": 0, 00:21:11.102 "snapshot": false, 00:21:11.102 "clone": false, 00:21:11.102 "esnap_clone": false 00:21:11.102 } 00:21:11.102 } 00:21:11.102 } 00:21:11.102 ]' 00:21:11.102 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:11.102 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:21:11.102 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:11.102 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:11.102 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:11.102 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:21:11.102 14:02:35 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:21:11.102 14:02:35 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:21:11.102 14:02:35 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:11.372 14:02:35 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:11.372 14:02:35 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:11.372 14:02:35 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b2edaeb3-6860-4891-bc46-9cef96adf98e 00:21:11.372 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=b2edaeb3-6860-4891-bc46-9cef96adf98e 00:21:11.372 14:02:35 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:21:11.372 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:21:11.372 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:21:11.372 14:02:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b2edaeb3-6860-4891-bc46-9cef96adf98e 00:21:11.630 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:11.630 { 00:21:11.630 "name": "b2edaeb3-6860-4891-bc46-9cef96adf98e", 00:21:11.630 "aliases": [ 00:21:11.630 "lvs/nvme0n1p0" 00:21:11.630 ], 00:21:11.630 "product_name": "Logical Volume", 00:21:11.630 "block_size": 4096, 00:21:11.630 "num_blocks": 26476544, 00:21:11.630 "uuid": "b2edaeb3-6860-4891-bc46-9cef96adf98e", 00:21:11.630 "assigned_rate_limits": { 00:21:11.630 "rw_ios_per_sec": 0, 00:21:11.630 "rw_mbytes_per_sec": 0, 00:21:11.630 "r_mbytes_per_sec": 0, 00:21:11.630 "w_mbytes_per_sec": 0 00:21:11.630 }, 00:21:11.630 "claimed": false, 00:21:11.630 "zoned": false, 00:21:11.630 "supported_io_types": { 00:21:11.630 "read": true, 00:21:11.630 "write": true, 00:21:11.630 "unmap": true, 00:21:11.630 "flush": false, 00:21:11.630 "reset": true, 00:21:11.630 "nvme_admin": false, 00:21:11.630 "nvme_io": false, 00:21:11.630 "nvme_io_md": false, 00:21:11.630 "write_zeroes": true, 00:21:11.630 "zcopy": false, 00:21:11.630 "get_zone_info": false, 00:21:11.630 "zone_management": false, 00:21:11.630 "zone_append": false, 00:21:11.630 "compare": false, 00:21:11.630 "compare_and_write": false, 00:21:11.630 "abort": false, 00:21:11.630 "seek_hole": true, 00:21:11.630 "seek_data": true, 00:21:11.630 "copy": false, 00:21:11.630 "nvme_iov_md": false 00:21:11.630 }, 00:21:11.630 "driver_specific": { 00:21:11.630 "lvol": { 00:21:11.630 "lvol_store_uuid": "82e1b0ec-0f06-4c43-9c15-c4fe7cf72adb", 00:21:11.630 "base_bdev": "nvme0n1", 00:21:11.630 "thin_provision": true, 00:21:11.630 "num_allocated_clusters": 0, 00:21:11.630 "snapshot": false, 00:21:11.630 "clone": false, 00:21:11.630 "esnap_clone": false 00:21:11.630 } 00:21:11.630 } 00:21:11.630 } 00:21:11.630 ]' 00:21:11.630 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:11.630 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:21:11.889 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:11.889 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:11.889 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:11.889 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:21:11.889 14:02:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:21:11.889 14:02:36 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:12.148 14:02:36 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:21:12.148 14:02:36 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:21:12.148 14:02:36 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:21:12.148 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:21:12.148 14:02:36 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b2edaeb3-6860-4891-bc46-9cef96adf98e 00:21:12.148 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=b2edaeb3-6860-4891-bc46-9cef96adf98e 
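[editor's note] The "/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected" message above is a shell bug in the test script, not an FTL failure: the xtrace shows the test as '[' -eq 1 ']', meaning the variable on the left expanded to an empty string, so '[' received no left operand. The run survives because the script does not exit on the failed test and falls through to the @56 branch. A defensive form (a sketch; 'flag' stands in for whatever variable fio.sh line 52 actually tests):

    # [[ ]] plus a default value avoids the unary-operator error when the flag is unset:
    if [[ "${flag:-0}" -eq 1 ]]; then
        echo "flag set"
    fi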
00:21:12.148 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:12.148 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:21:12.148 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:21:12.148 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b2edaeb3-6860-4891-bc46-9cef96adf98e 00:21:12.406 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:12.406 { 00:21:12.406 "name": "b2edaeb3-6860-4891-bc46-9cef96adf98e", 00:21:12.406 "aliases": [ 00:21:12.406 "lvs/nvme0n1p0" 00:21:12.406 ], 00:21:12.406 "product_name": "Logical Volume", 00:21:12.406 "block_size": 4096, 00:21:12.406 "num_blocks": 26476544, 00:21:12.406 "uuid": "b2edaeb3-6860-4891-bc46-9cef96adf98e", 00:21:12.406 "assigned_rate_limits": { 00:21:12.406 "rw_ios_per_sec": 0, 00:21:12.406 "rw_mbytes_per_sec": 0, 00:21:12.406 "r_mbytes_per_sec": 0, 00:21:12.406 "w_mbytes_per_sec": 0 00:21:12.406 }, 00:21:12.406 "claimed": false, 00:21:12.406 "zoned": false, 00:21:12.406 "supported_io_types": { 00:21:12.406 "read": true, 00:21:12.406 "write": true, 00:21:12.406 "unmap": true, 00:21:12.406 "flush": false, 00:21:12.406 "reset": true, 00:21:12.406 "nvme_admin": false, 00:21:12.406 "nvme_io": false, 00:21:12.406 "nvme_io_md": false, 00:21:12.406 "write_zeroes": true, 00:21:12.406 "zcopy": false, 00:21:12.406 "get_zone_info": false, 00:21:12.406 "zone_management": false, 00:21:12.406 "zone_append": false, 00:21:12.406 "compare": false, 00:21:12.406 "compare_and_write": false, 00:21:12.406 "abort": false, 00:21:12.406 "seek_hole": true, 00:21:12.406 "seek_data": true, 00:21:12.406 "copy": false, 00:21:12.406 "nvme_iov_md": false 00:21:12.406 }, 00:21:12.406 "driver_specific": { 00:21:12.406 "lvol": { 00:21:12.406 "lvol_store_uuid": "82e1b0ec-0f06-4c43-9c15-c4fe7cf72adb", 00:21:12.406 "base_bdev": "nvme0n1", 00:21:12.406 "thin_provision": true, 00:21:12.406 "num_allocated_clusters": 0, 00:21:12.406 "snapshot": false, 00:21:12.406 "clone": false, 00:21:12.406 "esnap_clone": false 00:21:12.406 } 00:21:12.406 } 00:21:12.406 } 00:21:12.406 ]' 00:21:12.406 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:12.406 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:21:12.406 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:12.406 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:12.406 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:12.406 14:02:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:21:12.406 14:02:36 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:21:12.406 14:02:36 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:21:12.406 14:02:36 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b2edaeb3-6860-4891-bc46-9cef96adf98e -c nvc0n1p0 --l2p_dram_limit 60 00:21:12.665 [2024-07-15 14:02:37.148815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.665 [2024-07-15 14:02:37.148890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:12.665 [2024-07-15 14:02:37.148913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:12.665 [2024-07-15 14:02:37.148929] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.665 [2024-07-15 14:02:37.149018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.665 [2024-07-15 14:02:37.149040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.665 [2024-07-15 14:02:37.149054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:12.665 [2024-07-15 14:02:37.149067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.665 [2024-07-15 14:02:37.149108] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:12.665 [2024-07-15 14:02:37.150122] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:12.665 [2024-07-15 14:02:37.150159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.665 [2024-07-15 14:02:37.150192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.665 [2024-07-15 14:02:37.150207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:21:12.665 [2024-07-15 14:02:37.150221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.665 [2024-07-15 14:02:37.150366] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 028c641d-4714-454d-a0fa-ef718aa42bfe 00:21:12.665 [2024-07-15 14:02:37.151495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.665 [2024-07-15 14:02:37.151536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:12.665 [2024-07-15 14:02:37.151568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:12.665 [2024-07-15 14:02:37.151581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.665 [2024-07-15 14:02:37.156370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.665 [2024-07-15 14:02:37.156427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.665 [2024-07-15 14:02:37.156448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.705 ms 00:21:12.665 [2024-07-15 14:02:37.156464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.665 [2024-07-15 14:02:37.156637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.665 [2024-07-15 14:02:37.156660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.665 [2024-07-15 14:02:37.156677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:21:12.665 [2024-07-15 14:02:37.156689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.665 [2024-07-15 14:02:37.156803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.665 [2024-07-15 14:02:37.156821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:12.665 [2024-07-15 14:02:37.156836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:12.665 [2024-07-15 14:02:37.156848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.665 [2024-07-15 14:02:37.156893] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:12.665 [2024-07-15 14:02:37.161547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.665 [2024-07-15 14:02:37.161595] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.665 [2024-07-15 14:02:37.161617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.666 ms 00:21:12.665 [2024-07-15 14:02:37.161632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.665 [2024-07-15 14:02:37.161711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.665 [2024-07-15 14:02:37.161738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:12.665 [2024-07-15 14:02:37.161752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:12.665 [2024-07-15 14:02:37.161766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.665 [2024-07-15 14:02:37.161838] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:12.665 [2024-07-15 14:02:37.162030] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:12.665 [2024-07-15 14:02:37.162053] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:12.665 [2024-07-15 14:02:37.162074] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:12.665 [2024-07-15 14:02:37.162091] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:12.665 [2024-07-15 14:02:37.162118] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:12.665 [2024-07-15 14:02:37.162131] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:12.665 [2024-07-15 14:02:37.162146] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:12.665 [2024-07-15 14:02:37.162158] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:12.665 [2024-07-15 14:02:37.162188] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:12.665 [2024-07-15 14:02:37.162202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.665 [2024-07-15 14:02:37.162215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:12.665 [2024-07-15 14:02:37.162229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:21:12.666 [2024-07-15 14:02:37.162242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.666 [2024-07-15 14:02:37.162368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.666 [2024-07-15 14:02:37.162390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:12.666 [2024-07-15 14:02:37.162404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:21:12.666 [2024-07-15 14:02:37.162418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.666 [2024-07-15 14:02:37.162539] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:12.666 [2024-07-15 14:02:37.162565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:12.666 [2024-07-15 14:02:37.162578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.666 [2024-07-15 14:02:37.162592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.666 [2024-07-15 14:02:37.162605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:12.666 [2024-07-15 
14:02:37.162618] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:12.666 [2024-07-15 14:02:37.162629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:12.666 [2024-07-15 14:02:37.162641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:12.666 [2024-07-15 14:02:37.162653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:12.666 [2024-07-15 14:02:37.162669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.666 [2024-07-15 14:02:37.162680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:12.666 [2024-07-15 14:02:37.162705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:12.666 [2024-07-15 14:02:37.162716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.666 [2024-07-15 14:02:37.162731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:12.666 [2024-07-15 14:02:37.162743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:12.666 [2024-07-15 14:02:37.162755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.666 [2024-07-15 14:02:37.162766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:12.666 [2024-07-15 14:02:37.162781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:12.666 [2024-07-15 14:02:37.162792] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.666 [2024-07-15 14:02:37.162804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:12.666 [2024-07-15 14:02:37.162815] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:12.666 [2024-07-15 14:02:37.162828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.666 [2024-07-15 14:02:37.162838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:12.666 [2024-07-15 14:02:37.162851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:12.666 [2024-07-15 14:02:37.162862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.666 [2024-07-15 14:02:37.162874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:12.666 [2024-07-15 14:02:37.162885] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:12.666 [2024-07-15 14:02:37.162897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.666 [2024-07-15 14:02:37.162908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:12.666 [2024-07-15 14:02:37.162921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:12.666 [2024-07-15 14:02:37.162931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.666 [2024-07-15 14:02:37.162944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:12.666 [2024-07-15 14:02:37.162954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:12.666 [2024-07-15 14:02:37.162969] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.666 [2024-07-15 14:02:37.162979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:12.666 [2024-07-15 14:02:37.162992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:12.666 [2024-07-15 14:02:37.163003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:21:12.666 [2024-07-15 14:02:37.163015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:12.666 [2024-07-15 14:02:37.163026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:12.666 [2024-07-15 14:02:37.163040] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.666 [2024-07-15 14:02:37.163052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:12.666 [2024-07-15 14:02:37.163066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:12.666 [2024-07-15 14:02:37.163077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.666 [2024-07-15 14:02:37.163089] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:12.666 [2024-07-15 14:02:37.163101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:12.666 [2024-07-15 14:02:37.163137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.666 [2024-07-15 14:02:37.163149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.666 [2024-07-15 14:02:37.163163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:12.666 [2024-07-15 14:02:37.163175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:12.666 [2024-07-15 14:02:37.163190] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:12.666 [2024-07-15 14:02:37.163201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:12.666 [2024-07-15 14:02:37.163213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:12.666 [2024-07-15 14:02:37.163225] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:12.666 [2024-07-15 14:02:37.163242] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:12.666 [2024-07-15 14:02:37.163266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.666 [2024-07-15 14:02:37.163282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:12.666 [2024-07-15 14:02:37.163294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:12.666 [2024-07-15 14:02:37.163324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:12.666 [2024-07-15 14:02:37.163337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:12.666 [2024-07-15 14:02:37.163351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:12.666 [2024-07-15 14:02:37.163362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:12.666 [2024-07-15 14:02:37.163376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:12.666 [2024-07-15 14:02:37.163388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:12.666 [2024-07-15 
14:02:37.163404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:12.666 [2024-07-15 14:02:37.163416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:12.666 [2024-07-15 14:02:37.163433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:12.666 [2024-07-15 14:02:37.163445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:12.666 [2024-07-15 14:02:37.163458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:12.666 [2024-07-15 14:02:37.163470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:12.666 [2024-07-15 14:02:37.163484] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:12.666 [2024-07-15 14:02:37.163500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.666 [2024-07-15 14:02:37.163516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:12.666 [2024-07-15 14:02:37.163528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:12.666 [2024-07-15 14:02:37.163548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:12.666 [2024-07-15 14:02:37.163560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:12.666 [2024-07-15 14:02:37.163576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.666 [2024-07-15 14:02:37.163588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:12.666 [2024-07-15 14:02:37.163602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:21:12.666 [2024-07-15 14:02:37.163614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.666 [2024-07-15 14:02:37.163697] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:21:12.666 [2024-07-15 14:02:37.163716] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:16.851 [2024-07-15 14:02:40.737380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.851 [2024-07-15 14:02:40.737473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:16.851 [2024-07-15 14:02:40.737502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3573.687 ms 00:21:16.851 [2024-07-15 14:02:40.737519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.851 [2024-07-15 14:02:40.778089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.851 [2024-07-15 14:02:40.778164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:16.851 [2024-07-15 14:02:40.778217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.201 ms 00:21:16.851 [2024-07-15 14:02:40.778244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.851 [2024-07-15 14:02:40.778521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.851 [2024-07-15 14:02:40.778564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:16.851 [2024-07-15 14:02:40.778587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:16.851 [2024-07-15 14:02:40.778602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.851 [2024-07-15 14:02:40.833770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.851 [2024-07-15 14:02:40.833843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:16.851 [2024-07-15 14:02:40.833883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.063 ms 00:21:16.851 [2024-07-15 14:02:40.833900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.851 [2024-07-15 14:02:40.833978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.851 [2024-07-15 14:02:40.833998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:16.851 [2024-07-15 14:02:40.834016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:16.851 [2024-07-15 14:02:40.834035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.851 [2024-07-15 14:02:40.834548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.851 [2024-07-15 14:02:40.834580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:16.851 [2024-07-15 14:02:40.834602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:21:16.851 [2024-07-15 14:02:40.834618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.851 [2024-07-15 14:02:40.834867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.851 [2024-07-15 14:02:40.834901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:16.851 [2024-07-15 14:02:40.834922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:21:16.851 [2024-07-15 14:02:40.834938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.851 [2024-07-15 14:02:40.865679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.851 [2024-07-15 14:02:40.865760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:16.851 [2024-07-15 
14:02:40.865790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.689 ms 00:21:16.851 [2024-07-15 14:02:40.865805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.851 [2024-07-15 14:02:40.883509] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:16.852 [2024-07-15 14:02:40.900062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:40.900179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:16.852 [2024-07-15 14:02:40.900207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.069 ms 00:21:16.852 [2024-07-15 14:02:40.900224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 14:02:40.972705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:40.972811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:16.852 [2024-07-15 14:02:40.972836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.326 ms 00:21:16.852 [2024-07-15 14:02:40.972854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 14:02:40.973148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:40.973176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:16.852 [2024-07-15 14:02:40.973193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:21:16.852 [2024-07-15 14:02:40.973214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 14:02:41.012509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:41.012594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:16.852 [2024-07-15 14:02:41.012619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.157 ms 00:21:16.852 [2024-07-15 14:02:41.012637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 14:02:41.051072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:41.051158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:16.852 [2024-07-15 14:02:41.051184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.383 ms 00:21:16.852 [2024-07-15 14:02:41.051202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 14:02:41.052106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:41.052149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:16.852 [2024-07-15 14:02:41.052168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:21:16.852 [2024-07-15 14:02:41.052184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 14:02:41.171749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:41.171827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:16.852 [2024-07-15 14:02:41.171850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.468 ms 00:21:16.852 [2024-07-15 14:02:41.171870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 
14:02:41.205230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:41.205324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:16.852 [2024-07-15 14:02:41.205348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.288 ms 00:21:16.852 [2024-07-15 14:02:41.205364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 14:02:41.237577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:41.237656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:16.852 [2024-07-15 14:02:41.237678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.141 ms 00:21:16.852 [2024-07-15 14:02:41.237692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 14:02:41.269898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:41.269971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:16.852 [2024-07-15 14:02:41.269993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.132 ms 00:21:16.852 [2024-07-15 14:02:41.270008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 14:02:41.270092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:41.270123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:16.852 [2024-07-15 14:02:41.270138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:16.852 [2024-07-15 14:02:41.270155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 14:02:41.270362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.852 [2024-07-15 14:02:41.270401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:16.852 [2024-07-15 14:02:41.270416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:16.852 [2024-07-15 14:02:41.270430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.852 [2024-07-15 14:02:41.271663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4122.301 ms, result 0 00:21:16.852 { 00:21:16.852 "name": "ftl0", 00:21:16.852 "uuid": "028c641d-4714-454d-a0fa-ef718aa42bfe" 00:21:16.852 } 00:21:16.852 14:02:41 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:21:16.852 14:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:21:16.852 14:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:16.852 14:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:21:16.852 14:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:16.852 14:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:16.852 14:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:17.110 14:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:17.368 [ 00:21:17.368 { 00:21:17.368 "name": "ftl0", 00:21:17.368 "aliases": [ 00:21:17.368 "028c641d-4714-454d-a0fa-ef718aa42bfe" 00:21:17.368 ], 00:21:17.368 "product_name": "FTL 
disk", 00:21:17.368 "block_size": 4096, 00:21:17.368 "num_blocks": 20971520, 00:21:17.368 "uuid": "028c641d-4714-454d-a0fa-ef718aa42bfe", 00:21:17.368 "assigned_rate_limits": { 00:21:17.368 "rw_ios_per_sec": 0, 00:21:17.368 "rw_mbytes_per_sec": 0, 00:21:17.368 "r_mbytes_per_sec": 0, 00:21:17.368 "w_mbytes_per_sec": 0 00:21:17.368 }, 00:21:17.368 "claimed": false, 00:21:17.368 "zoned": false, 00:21:17.368 "supported_io_types": { 00:21:17.368 "read": true, 00:21:17.368 "write": true, 00:21:17.368 "unmap": true, 00:21:17.368 "flush": true, 00:21:17.368 "reset": false, 00:21:17.368 "nvme_admin": false, 00:21:17.368 "nvme_io": false, 00:21:17.368 "nvme_io_md": false, 00:21:17.368 "write_zeroes": true, 00:21:17.368 "zcopy": false, 00:21:17.368 "get_zone_info": false, 00:21:17.368 "zone_management": false, 00:21:17.368 "zone_append": false, 00:21:17.368 "compare": false, 00:21:17.368 "compare_and_write": false, 00:21:17.368 "abort": false, 00:21:17.368 "seek_hole": false, 00:21:17.368 "seek_data": false, 00:21:17.368 "copy": false, 00:21:17.368 "nvme_iov_md": false 00:21:17.368 }, 00:21:17.368 "driver_specific": { 00:21:17.368 "ftl": { 00:21:17.368 "base_bdev": "b2edaeb3-6860-4891-bc46-9cef96adf98e", 00:21:17.368 "cache": "nvc0n1p0" 00:21:17.368 } 00:21:17.368 } 00:21:17.368 } 00:21:17.368 ] 00:21:17.368 14:02:41 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:21:17.368 14:02:41 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:21:17.368 14:02:41 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:17.934 14:02:42 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:21:17.934 14:02:42 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:17.934 [2024-07-15 14:02:42.453126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.934 [2024-07-15 14:02:42.453198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:17.934 [2024-07-15 14:02:42.453227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:17.934 [2024-07-15 14:02:42.453240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.934 [2024-07-15 14:02:42.453291] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:17.934 [2024-07-15 14:02:42.456721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.934 [2024-07-15 14:02:42.456767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:17.934 [2024-07-15 14:02:42.456784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.389 ms 00:21:17.934 [2024-07-15 14:02:42.456798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.934 [2024-07-15 14:02:42.457288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.934 [2024-07-15 14:02:42.457365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:17.934 [2024-07-15 14:02:42.457384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:21:17.934 [2024-07-15 14:02:42.457401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.934 [2024-07-15 14:02:42.460764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.934 [2024-07-15 14:02:42.460810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:17.934 
[2024-07-15 14:02:42.460828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.325 ms 00:21:17.934 [2024-07-15 14:02:42.460841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.934 [2024-07-15 14:02:42.467660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.934 [2024-07-15 14:02:42.467706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:17.934 [2024-07-15 14:02:42.467722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.786 ms 00:21:17.934 [2024-07-15 14:02:42.467736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.195 [2024-07-15 14:02:42.499774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.195 [2024-07-15 14:02:42.499851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:18.195 [2024-07-15 14:02:42.499874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.913 ms 00:21:18.195 [2024-07-15 14:02:42.499889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.195 [2024-07-15 14:02:42.519462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.195 [2024-07-15 14:02:42.519543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:18.195 [2024-07-15 14:02:42.519565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.505 ms 00:21:18.195 [2024-07-15 14:02:42.519580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.195 [2024-07-15 14:02:42.519894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.195 [2024-07-15 14:02:42.519924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:18.195 [2024-07-15 14:02:42.519940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:21:18.195 [2024-07-15 14:02:42.519954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.195 [2024-07-15 14:02:42.552550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.195 [2024-07-15 14:02:42.552656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:18.195 [2024-07-15 14:02:42.552678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.558 ms 00:21:18.195 [2024-07-15 14:02:42.552693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.195 [2024-07-15 14:02:42.586142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.195 [2024-07-15 14:02:42.586272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:18.195 [2024-07-15 14:02:42.586295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.351 ms 00:21:18.195 [2024-07-15 14:02:42.586348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.195 [2024-07-15 14:02:42.618458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.195 [2024-07-15 14:02:42.618574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:18.195 [2024-07-15 14:02:42.618598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.008 ms 00:21:18.195 [2024-07-15 14:02:42.618613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.195 [2024-07-15 14:02:42.651848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.195 [2024-07-15 14:02:42.651960] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:18.195 [2024-07-15 14:02:42.651983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.997 ms 00:21:18.195 [2024-07-15 14:02:42.651998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.195 [2024-07-15 14:02:42.652084] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:18.195 [2024-07-15 14:02:42.652115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 
[2024-07-15 14:02:42.652442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:18.195 [2024-07-15 14:02:42.652646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:21:18.196 [2024-07-15 14:02:42.652779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.652991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:18.196 [2024-07-15 14:02:42.653574] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:18.196 [2024-07-15 14:02:42.653591] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 028c641d-4714-454d-a0fa-ef718aa42bfe 00:21:18.196 [2024-07-15 14:02:42.653607] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:18.196 [2024-07-15 14:02:42.653622] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:18.197 [2024-07-15 14:02:42.653638] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:18.197 [2024-07-15 14:02:42.653650] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:18.197 [2024-07-15 14:02:42.653664] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:18.197 [2024-07-15 14:02:42.653676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:18.197 [2024-07-15 14:02:42.653689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:18.197 [2024-07-15 14:02:42.653700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:18.197 [2024-07-15 14:02:42.653712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:18.197 [2024-07-15 14:02:42.653724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-07-15 14:02:42.653738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:18.197 [2024-07-15 14:02:42.653751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:21:18.197 [2024-07-15 14:02:42.653765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-07-15 14:02:42.671161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-07-15 14:02:42.671232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:18.197 [2024-07-15 14:02:42.671252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.290 ms 00:21:18.197 [2024-07-15 14:02:42.671271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-07-15 14:02:42.671762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-07-15 14:02:42.671812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:18.197 [2024-07-15 14:02:42.671828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:21:18.197 [2024-07-15 14:02:42.671842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-07-15 14:02:42.730612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.197 [2024-07-15 14:02:42.730702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:18.197 [2024-07-15 14:02:42.730724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.197 [2024-07-15 14:02:42.730739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
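The management trace running here is the 'FTL shutdown' pipeline: each Action persists one piece of device state (L2P, NV cache metadata, valid map, P2L, band info, trim metadata, superblock), the clean-state flag is written, statistics are dumped, and the Rollback steps (continuing below) release startup resources in reverse order. The whole sequence is driven over JSON-RPC from the test script. A minimal sketch of the lifecycle being exercised, assuming a running spdk_tgt with the base and cache bdevs already attached; the bdev names passed to bdev_ftl_create are illustrative, everything else matches the rpc.py calls visible in this trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Create an FTL bdev on a base device backed by an NV cache device
  # (illustrative bdev names; the test derives these from its own setup).
  $RPC bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0

  # Block until bdev examination completes, then confirm ftl0 is registered.
  $RPC bdev_wait_for_examine
  $RPC bdev_get_bdevs -b ftl0 -t 2000

  # Save the bdev subsystem config for the fio plugin to consume.
  $RPC save_subsystem_config -n bdev

  # Unloading triggers the 'FTL shutdown' sequence traced here.
  $RPC bdev_ftl_unload -b ftl0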
00:21:18.197 [2024-07-15 14:02:42.730833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.197 [2024-07-15 14:02:42.730854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:18.197 [2024-07-15 14:02:42.730867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.197 [2024-07-15 14:02:42.730881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-07-15 14:02:42.731040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.197 [2024-07-15 14:02:42.731066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:18.197 [2024-07-15 14:02:42.731081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.197 [2024-07-15 14:02:42.731094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-07-15 14:02:42.731129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.197 [2024-07-15 14:02:42.731149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:18.197 [2024-07-15 14:02:42.731162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.197 [2024-07-15 14:02:42.731176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-07-15 14:02:42.839885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.455 [2024-07-15 14:02:42.839986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:18.455 [2024-07-15 14:02:42.840008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.455 [2024-07-15 14:02:42.840023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-07-15 14:02:42.927641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.455 [2024-07-15 14:02:42.927726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:18.455 [2024-07-15 14:02:42.927747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.455 [2024-07-15 14:02:42.927762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-07-15 14:02:42.927878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.455 [2024-07-15 14:02:42.927905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.455 [2024-07-15 14:02:42.927925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.455 [2024-07-15 14:02:42.927940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-07-15 14:02:42.928022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.455 [2024-07-15 14:02:42.928053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.455 [2024-07-15 14:02:42.928066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.455 [2024-07-15 14:02:42.928079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-07-15 14:02:42.928221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.455 [2024-07-15 14:02:42.928247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.455 [2024-07-15 14:02:42.928263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.455 [2024-07-15 
14:02:42.928276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-07-15 14:02:42.928372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.455 [2024-07-15 14:02:42.928397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:18.455 [2024-07-15 14:02:42.928410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.455 [2024-07-15 14:02:42.928424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-07-15 14:02:42.928480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.455 [2024-07-15 14:02:42.928499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:18.455 [2024-07-15 14:02:42.928515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.455 [2024-07-15 14:02:42.928528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-07-15 14:02:42.928592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.455 [2024-07-15 14:02:42.928616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:18.455 [2024-07-15 14:02:42.928629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.455 [2024-07-15 14:02:42.928642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-07-15 14:02:42.928853] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 475.690 ms, result 0 00:21:18.456 true 00:21:18.456 14:02:42 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 79611 00:21:18.456 14:02:42 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 79611 ']' 00:21:18.456 14:02:42 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 79611 00:21:18.456 14:02:42 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:21:18.456 14:02:42 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.456 14:02:42 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79611 00:21:18.456 killing process with pid 79611 00:21:18.456 14:02:42 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:18.456 14:02:42 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:18.456 14:02:42 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79611' 00:21:18.456 14:02:42 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 79611 00:21:18.456 14:02:42 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 79611 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:23.716 14:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:23.716 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:21:23.716 fio-3.35 00:21:23.716 Starting 1 thread 00:21:28.976 00:21:28.976 test: (groupid=0, jobs=1): err= 0: pid=79828: Mon Jul 15 14:02:52 2024 00:21:28.976 read: IOPS=1080, BW=71.8MiB/s (75.3MB/s)(255MiB/3546msec) 00:21:28.976 slat (nsec): min=5736, max=58300, avg=8493.17, stdev=4598.16 00:21:28.976 clat (usec): min=283, max=2416, avg=411.35, stdev=59.16 00:21:28.976 lat (usec): min=291, max=2423, avg=419.84, stdev=59.90 00:21:28.976 clat percentiles (usec): 00:21:28.976 | 1.00th=[ 338], 5.00th=[ 359], 10.00th=[ 371], 20.00th=[ 375], 00:21:28.976 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 404], 00:21:28.976 | 70.00th=[ 441], 80.00th=[ 449], 90.00th=[ 469], 95.00th=[ 515], 00:21:28.976 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 652], 99.95th=[ 1004], 00:21:28.976 | 99.99th=[ 2409] 00:21:28.976 write: IOPS=1088, BW=72.3MiB/s (75.8MB/s)(256MiB/3542msec); 0 zone resets 00:21:28.976 slat (usec): min=20, max=159, avg=26.05, stdev= 7.33 00:21:28.976 clat (usec): min=335, max=930, avg=461.48, stdev=55.45 00:21:28.976 lat (usec): min=358, max=960, avg=487.53, stdev=56.70 00:21:28.976 clat percentiles (usec): 00:21:28.976 | 1.00th=[ 379], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 408], 00:21:28.976 | 30.00th=[ 420], 40.00th=[ 453], 50.00th=[ 465], 60.00th=[ 474], 00:21:28.976 | 70.00th=[ 478], 80.00th=[ 490], 90.00th=[ 537], 95.00th=[ 553], 00:21:28.976 | 99.00th=[ 668], 99.50th=[ 701], 99.90th=[ 766], 99.95th=[ 791], 00:21:28.976 | 99.99th=[ 930] 00:21:28.976 bw ( KiB/s): min=71536, max=75752, per=100.00%, avg=74120.00, stdev=1698.64, samples=7 00:21:28.976 iops : min= 1052, max= 1114, avg=1090.00, stdev=24.98, samples=7 00:21:28.976 lat (usec) : 500=88.52%, 750=11.35%, 1000=0.12% 00:21:28.976 lat 
(msec) : 4=0.01% 00:21:28.976 cpu : usr=99.13%, sys=0.06%, ctx=5, majf=0, minf=1172 00:21:28.976 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:28.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.976 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.976 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:28.976 00:21:28.976 Run status group 0 (all jobs): 00:21:28.976 READ: bw=71.8MiB/s (75.3MB/s), 71.8MiB/s-71.8MiB/s (75.3MB/s-75.3MB/s), io=255MiB (267MB), run=3546-3546msec 00:21:28.976 WRITE: bw=72.3MiB/s (75.8MB/s), 72.3MiB/s-72.3MiB/s (75.8MB/s-75.8MB/s), io=256MiB (269MB), run=3542-3542msec 00:21:29.909 ----------------------------------------------------- 00:21:29.909 Suppressions used: 00:21:29.909 count bytes template 00:21:29.909 1 5 /usr/src/fio/parse.c 00:21:29.909 1 8 libtcmalloc_minimal.so 00:21:29.909 1 904 libcrypto.so 00:21:29.909 ----------------------------------------------------- 00:21:29.909 00:21:29.909 14:02:54 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:21:29.909 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.909 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:30.174 14:02:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:30.174 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:30.174 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:30.174 fio-3.35 00:21:30.174 Starting 2 threads 00:22:02.350 00:22:02.350 first_half: (groupid=0, jobs=1): err= 0: pid=79925: Mon Jul 15 14:03:25 2024 00:22:02.350 read: IOPS=2226, BW=8906KiB/s (9120kB/s)(256MiB/29407msec) 00:22:02.350 slat (nsec): min=4710, max=39482, avg=8081.22, stdev=2043.11 00:22:02.350 clat (usec): min=745, max=303090, avg=48502.77, stdev=31544.11 00:22:02.350 lat (usec): min=751, max=303099, avg=48510.85, stdev=31544.43 00:22:02.350 clat percentiles (msec): 00:22:02.350 | 1.00th=[ 11], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 39], 00:22:02.350 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 43], 00:22:02.350 | 70.00th=[ 45], 80.00th=[ 48], 90.00th=[ 54], 95.00th=[ 96], 00:22:02.350 | 99.00th=[ 218], 99.50th=[ 236], 99.90th=[ 268], 99.95th=[ 275], 00:22:02.350 | 99.99th=[ 296] 00:22:02.350 write: IOPS=2231, BW=8927KiB/s (9141kB/s)(256MiB/29366msec); 0 zone resets 00:22:02.350 slat (usec): min=6, max=186, avg= 9.23, stdev= 4.74 00:22:02.350 clat (usec): min=480, max=58464, avg=8938.74, stdev=8924.12 00:22:02.350 lat (usec): min=500, max=58474, avg=8947.97, stdev=8924.41 00:22:02.350 clat percentiles (usec): 00:22:02.350 | 1.00th=[ 1123], 5.00th=[ 1582], 10.00th=[ 1958], 20.00th=[ 3687], 00:22:02.350 | 30.00th=[ 4883], 40.00th=[ 5866], 50.00th=[ 6849], 60.00th=[ 7701], 00:22:02.350 | 70.00th=[ 8717], 80.00th=[10421], 90.00th=[17957], 95.00th=[25035], 00:22:02.350 | 99.00th=[48497], 99.50th=[52167], 99.90th=[56361], 99.95th=[56886], 00:22:02.350 | 99.99th=[57934] 00:22:02.350 bw ( KiB/s): min= 112, max=45408, per=100.00%, avg=20821.72, stdev=12129.86, samples=25 00:22:02.350 iops : min= 28, max=11352, avg=5205.40, stdev=3032.45, samples=25 00:22:02.350 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.24% 00:22:02.350 lat (msec) : 2=5.00%, 4=5.94%, 10=28.23%, 20=8.31%, 50=42.92% 00:22:02.350 lat (msec) : 100=6.93%, 250=2.29%, 500=0.11% 00:22:02.350 cpu : usr=99.09%, sys=0.14%, ctx=60, majf=0, minf=5556 00:22:02.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:02.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.350 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:02.350 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:02.350 second_half: (groupid=0, jobs=1): err= 0: pid=79926: Mon Jul 15 14:03:25 2024 00:22:02.350 read: IOPS=2244, BW=8978KiB/s (9194kB/s)(256MiB/29177msec) 00:22:02.350 slat (nsec): min=4896, max=69039, avg=8122.03, stdev=2090.81 00:22:02.350 clat (msec): min=12, max=290, avg=48.99, stdev=28.42 00:22:02.350 lat (msec): min=12, max=290, avg=49.00, stdev=28.42 00:22:02.350 clat percentiles (msec): 00:22:02.350 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 39], 00:22:02.350 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 43], 00:22:02.350 | 70.00th=[ 45], 80.00th=[ 50], 90.00th=[ 56], 95.00th=[ 93], 00:22:02.350 | 99.00th=[ 205], 
99.50th=[ 222], 99.90th=[ 243], 99.95th=[ 249], 00:22:02.350 | 99.99th=[ 279] 00:22:02.350 write: IOPS=2412, BW=9652KiB/s (9883kB/s)(256MiB/27160msec); 0 zone resets 00:22:02.350 slat (usec): min=6, max=245, avg= 9.29, stdev= 4.92 00:22:02.350 clat (usec): min=444, max=59241, avg=8007.33, stdev=5343.98 00:22:02.350 lat (usec): min=462, max=59249, avg=8016.62, stdev=5344.45 00:22:02.350 clat percentiles (usec): 00:22:02.350 | 1.00th=[ 1352], 5.00th=[ 2278], 10.00th=[ 3195], 20.00th=[ 4293], 00:22:02.350 | 30.00th=[ 5276], 40.00th=[ 6194], 50.00th=[ 6980], 60.00th=[ 7701], 00:22:02.350 | 70.00th=[ 8455], 80.00th=[ 9896], 90.00th=[14877], 95.00th=[18744], 00:22:02.350 | 99.00th=[26870], 99.50th=[36963], 99.90th=[47973], 99.95th=[49546], 00:22:02.350 | 99.99th=[56886] 00:22:02.350 bw ( KiB/s): min= 88, max=39368, per=100.00%, avg=22795.13, stdev=12008.37, samples=23 00:22:02.350 iops : min= 22, max= 9842, avg=5698.78, stdev=3002.09, samples=23 00:22:02.350 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.12% 00:22:02.350 lat (msec) : 2=1.56%, 4=6.73%, 10=31.75%, 20=8.32%, 50=41.91% 00:22:02.350 lat (msec) : 100=7.27%, 250=2.29%, 500=0.02% 00:22:02.350 cpu : usr=99.02%, sys=0.18%, ctx=38, majf=0, minf=5563 00:22:02.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:02.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.350 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:02.350 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:02.350 00:22:02.350 Run status group 0 (all jobs): 00:22:02.350 READ: bw=17.4MiB/s (18.2MB/s), 8906KiB/s-8978KiB/s (9120kB/s-9194kB/s), io=512MiB (536MB), run=29177-29407msec 00:22:02.350 WRITE: bw=17.4MiB/s (18.3MB/s), 8927KiB/s-9652KiB/s (9141kB/s-9883kB/s), io=512MiB (537MB), run=27160-29366msec 00:22:03.723 ----------------------------------------------------- 00:22:03.723 Suppressions used: 00:22:03.723 count bytes template 00:22:03.723 2 10 /usr/src/fio/parse.c 00:22:03.723 3 288 /usr/src/fio/iolog.c 00:22:03.723 1 8 libtcmalloc_minimal.so 00:22:03.723 1 904 libcrypto.so 00:22:03.723 ----------------------------------------------------- 00:22:03.723 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:03.723 14:03:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:03.723 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:03.723 fio-3.35 00:22:03.723 Starting 1 thread 00:22:21.946 00:22:21.946 test: (groupid=0, jobs=1): err= 0: pid=80290: Mon Jul 15 14:03:46 2024 00:22:21.946 read: IOPS=6225, BW=24.3MiB/s (25.5MB/s)(255MiB/10474msec) 00:22:21.946 slat (nsec): min=4665, max=47843, avg=7263.88, stdev=2116.30 00:22:21.946 clat (usec): min=769, max=39358, avg=20549.27, stdev=1943.16 00:22:21.946 lat (usec): min=786, max=39366, avg=20556.53, stdev=1943.18 00:22:21.946 clat percentiles (usec): 00:22:21.946 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19268], 20.00th=[19268], 00:22:21.946 | 30.00th=[19530], 40.00th=[19792], 50.00th=[19792], 60.00th=[20055], 00:22:21.946 | 70.00th=[20579], 80.00th=[21103], 90.00th=[23200], 95.00th=[24511], 00:22:21.946 | 99.00th=[28181], 99.50th=[28967], 99.90th=[30278], 99.95th=[34341], 00:22:21.946 | 99.99th=[38536] 00:22:21.946 write: IOPS=10.8k, BW=42.1MiB/s (44.1MB/s)(256MiB/6087msec); 0 zone resets 00:22:21.946 slat (usec): min=5, max=318, avg= 9.91, stdev= 5.14 00:22:21.946 clat (usec): min=702, max=71801, avg=11827.48, stdev=15122.41 00:22:21.946 lat (usec): min=713, max=71810, avg=11837.39, stdev=15122.53 00:22:21.946 clat percentiles (usec): 00:22:21.946 | 1.00th=[ 1012], 5.00th=[ 1237], 10.00th=[ 1369], 20.00th=[ 1565], 00:22:21.946 | 30.00th=[ 1795], 40.00th=[ 2343], 50.00th=[ 7504], 60.00th=[ 8848], 00:22:21.946 | 70.00th=[10290], 80.00th=[12256], 90.00th=[42730], 95.00th=[46924], 00:22:21.946 | 99.00th=[56361], 99.50th=[58459], 99.90th=[67634], 99.95th=[68682], 00:22:21.946 | 99.99th=[70779] 00:22:21.946 bw ( KiB/s): min= 5120, max=60760, per=93.64%, avg=40329.85, stdev=13691.80, samples=13 00:22:21.946 iops : min= 1280, max=15190, avg=10082.46, stdev=3422.95, samples=13 00:22:21.946 lat (usec) : 750=0.01%, 1000=0.43% 00:22:21.946 lat (msec) : 2=17.56%, 4=2.90%, 10=13.33%, 20=34.48%, 50=29.92% 00:22:21.946 lat (msec) : 100=1.37% 00:22:21.946 cpu : usr=98.79%, sys=0.27%, ctx=22, majf=0, minf=5568 00:22:21.946 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:21.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.946 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.946 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.946 00:22:21.946 Run status group 0 (all jobs): 00:22:21.946 READ: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=255MiB (267MB), run=10474-10474msec 00:22:21.946 WRITE: bw=42.1MiB/s (44.1MB/s), 42.1MiB/s-42.1MiB/s (44.1MB/s-44.1MB/s), io=256MiB (268MB), run=6087-6087msec 00:22:23.315 ----------------------------------------------------- 00:22:23.315 Suppressions used: 00:22:23.315 count bytes template 00:22:23.315 1 5 /usr/src/fio/parse.c 00:22:23.315 2 192 /usr/src/fio/iolog.c 00:22:23.315 1 8 libtcmalloc_minimal.so 00:22:23.315 1 904 libcrypto.so 00:22:23.315 ----------------------------------------------------- 00:22:23.315 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:23.315 Remove shared memory files 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62442 /dev/shm/spdk_tgt_trace.pid78550 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:22:23.315 ************************************ 00:22:23.315 END TEST ftl_fio_basic 00:22:23.315 ************************************ 00:22:23.315 00:22:23.315 real 1m15.784s 00:22:23.315 user 2m50.880s 00:22:23.315 sys 0m3.768s 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:23.315 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:23.315 14:03:47 ftl -- common/autotest_common.sh@1142 -- # return 0 00:22:23.315 14:03:47 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:23.315 14:03:47 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:23.315 14:03:47 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:23.315 14:03:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:23.315 ************************************ 00:22:23.315 START TEST ftl_bdevperf 00:22:23.315 ************************************ 00:22:23.315 14:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:23.315 * Looking for test storage... 
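The fio runs in this test were launched through the fio_bdev/fio_plugin helper traced earlier: it resolves which ASAN runtime the spdk_bdev fio plugin was linked against (ldd on the plugin, grep libasan, awk '{print $3}') and preloads that runtime ahead of the plugin, since an ASAN-instrumented shared object needs the sanitizer runtime loaded before anything else in the process. A condensed sketch of that mechanism, assuming fio is installed under /usr/src/fio and using one of the job files named in the trace:

  # Condensed sketch of the fio_plugin wrapper seen in the trace above.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

  # Find the ASAN runtime the plugin links against, if any.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

  # Preload the sanitizer runtime first, then the plugin itself, and run fio;
  # the job file selects the plugin via ioengine=spdk_bdev.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"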
00:22:23.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:23.315 14:03:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:23.315 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:23.572 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:22:23.573 14:03:47 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=80551 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 80551 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 80551 ']' 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.573 14:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:23.573 [2024-07-15 14:03:48.007245] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:22:23.573 [2024-07-15 14:03:48.007484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80551 ] 00:22:23.830 [2024-07-15 14:03:48.196870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.087 [2024-07-15 14:03:48.388441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.652 14:03:48 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.652 14:03:48 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:22:24.652 14:03:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:24.652 14:03:48 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:22:24.652 14:03:48 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:24.652 14:03:48 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:22:24.652 14:03:48 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:22:24.652 14:03:48 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:24.909 14:03:49 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:24.909 14:03:49 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:22:24.909 14:03:49 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:24.909 14:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:24.909 14:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:24.909 14:03:49 
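The launch sequence traced here is the stock SPDK test pattern: start bdevperf idle with -z so it only brings up its RPC server, install a cleanup trap, then block until the UNIX-domain socket answers. Roughly (a sketch; killprocess and waitforlisten are helpers from test/common/autotest_common.sh):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!                                   # 80551 in this run
    trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $bdevperf_pid                       # polls /var/tmp/spdk.sock until it is up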
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:22:24.909 14:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:22:24.909 14:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:25.167 14:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:25.167 { 00:22:25.167 "name": "nvme0n1", 00:22:25.167 "aliases": [ 00:22:25.167 "43c513a9-39a1-43f7-966a-0adab477eb13" 00:22:25.167 ], 00:22:25.167 "product_name": "NVMe disk", 00:22:25.167 "block_size": 4096, 00:22:25.167 "num_blocks": 1310720, 00:22:25.167 "uuid": "43c513a9-39a1-43f7-966a-0adab477eb13", 00:22:25.167 "assigned_rate_limits": { 00:22:25.167 "rw_ios_per_sec": 0, 00:22:25.167 "rw_mbytes_per_sec": 0, 00:22:25.167 "r_mbytes_per_sec": 0, 00:22:25.167 "w_mbytes_per_sec": 0 00:22:25.167 }, 00:22:25.167 "claimed": true, 00:22:25.167 "claim_type": "read_many_write_one", 00:22:25.167 "zoned": false, 00:22:25.167 "supported_io_types": { 00:22:25.167 "read": true, 00:22:25.167 "write": true, 00:22:25.167 "unmap": true, 00:22:25.167 "flush": true, 00:22:25.167 "reset": true, 00:22:25.167 "nvme_admin": true, 00:22:25.167 "nvme_io": true, 00:22:25.167 "nvme_io_md": false, 00:22:25.167 "write_zeroes": true, 00:22:25.167 "zcopy": false, 00:22:25.167 "get_zone_info": false, 00:22:25.167 "zone_management": false, 00:22:25.167 "zone_append": false, 00:22:25.167 "compare": true, 00:22:25.167 "compare_and_write": false, 00:22:25.167 "abort": true, 00:22:25.167 "seek_hole": false, 00:22:25.167 "seek_data": false, 00:22:25.167 "copy": true, 00:22:25.167 "nvme_iov_md": false 00:22:25.167 }, 00:22:25.167 "driver_specific": { 00:22:25.167 "nvme": [ 00:22:25.167 { 00:22:25.167 "pci_address": "0000:00:11.0", 00:22:25.167 "trid": { 00:22:25.167 "trtype": "PCIe", 00:22:25.167 "traddr": "0000:00:11.0" 00:22:25.167 }, 00:22:25.167 "ctrlr_data": { 00:22:25.167 "cntlid": 0, 00:22:25.167 "vendor_id": "0x1b36", 00:22:25.167 "model_number": "QEMU NVMe Ctrl", 00:22:25.167 "serial_number": "12341", 00:22:25.167 "firmware_revision": "8.0.0", 00:22:25.167 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:25.167 "oacs": { 00:22:25.167 "security": 0, 00:22:25.167 "format": 1, 00:22:25.167 "firmware": 0, 00:22:25.167 "ns_manage": 1 00:22:25.167 }, 00:22:25.167 "multi_ctrlr": false, 00:22:25.167 "ana_reporting": false 00:22:25.167 }, 00:22:25.167 "vs": { 00:22:25.167 "nvme_version": "1.4" 00:22:25.167 }, 00:22:25.167 "ns_data": { 00:22:25.167 "id": 1, 00:22:25.167 "can_share": false 00:22:25.167 } 00:22:25.167 } 00:22:25.167 ], 00:22:25.167 "mp_policy": "active_passive" 00:22:25.167 } 00:22:25.167 } 00:22:25.167 ]' 00:22:25.167 14:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:25.167 14:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:25.167 14:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:25.167 14:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:25.167 14:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:25.167 14:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:22:25.167 14:03:49 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:22:25.167 14:03:49 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:25.167 14:03:49 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:22:25.167 14:03:49 ftl.ftl_bdevperf 
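The get_bdev_size helper whose trace appears above simply multiplies the two fields it pulls out of bdev_get_bdevs; condensed, with the numbers from this run:

    bdev_info=$($rpc_py bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1310720
    echo $(( bs * nb / 1024 / 1024 ))              # 4096 * 1310720 / 2^20 = 5120 (MiB)

That 5120 MiB becomes base_size; since the requested 103424 MiB volume exceeds the raw namespace, the lvol created below has to be thin-provisioned (the -t flag in bdev_lvol_create).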
-- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:25.167 14:03:49 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:25.424 14:03:49 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=82e1b0ec-0f06-4c43-9c15-c4fe7cf72adb 00:22:25.424 14:03:49 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:22:25.424 14:03:49 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82e1b0ec-0f06-4c43-9c15-c4fe7cf72adb 00:22:25.682 14:03:50 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:26.246 14:03:50 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=3f29c725-b252-4027-b3d1-0b2c0b387695 00:22:26.246 14:03:50 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3f29c725-b252-4027-b3d1-0b2c0b387695 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=845c8a29-d0be-446d-a321-69836b68eda7 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 845c8a29-d0be-446d-a321-69836b68eda7 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=845c8a29-d0be-446d-a321-69836b68eda7 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 845c8a29-d0be-446d-a321-69836b68eda7 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=845c8a29-d0be-446d-a321-69836b68eda7 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:22:26.504 14:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 845c8a29-d0be-446d-a321-69836b68eda7 00:22:26.761 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:26.761 { 00:22:26.761 "name": "845c8a29-d0be-446d-a321-69836b68eda7", 00:22:26.761 "aliases": [ 00:22:26.761 "lvs/nvme0n1p0" 00:22:26.761 ], 00:22:26.761 "product_name": "Logical Volume", 00:22:26.761 "block_size": 4096, 00:22:26.761 "num_blocks": 26476544, 00:22:26.761 "uuid": "845c8a29-d0be-446d-a321-69836b68eda7", 00:22:26.761 "assigned_rate_limits": { 00:22:26.761 "rw_ios_per_sec": 0, 00:22:26.761 "rw_mbytes_per_sec": 0, 00:22:26.761 "r_mbytes_per_sec": 0, 00:22:26.761 "w_mbytes_per_sec": 0 00:22:26.761 }, 00:22:26.761 "claimed": false, 00:22:26.761 "zoned": false, 00:22:26.761 "supported_io_types": { 00:22:26.761 "read": true, 00:22:26.761 "write": true, 00:22:26.761 "unmap": true, 00:22:26.761 "flush": false, 00:22:26.761 "reset": true, 00:22:26.761 "nvme_admin": false, 00:22:26.761 "nvme_io": false, 00:22:26.761 "nvme_io_md": false, 00:22:26.761 "write_zeroes": true, 00:22:26.761 "zcopy": false, 00:22:26.761 "get_zone_info": false, 00:22:26.761 "zone_management": false, 00:22:26.761 "zone_append": false, 00:22:26.761 "compare": false, 00:22:26.761 "compare_and_write": false, 00:22:26.761 "abort": false, 00:22:26.761 "seek_hole": true, 
00:22:26.761 "seek_data": true, 00:22:26.761 "copy": false, 00:22:26.761 "nvme_iov_md": false 00:22:26.761 }, 00:22:26.761 "driver_specific": { 00:22:26.761 "lvol": { 00:22:26.761 "lvol_store_uuid": "3f29c725-b252-4027-b3d1-0b2c0b387695", 00:22:26.761 "base_bdev": "nvme0n1", 00:22:26.761 "thin_provision": true, 00:22:26.761 "num_allocated_clusters": 0, 00:22:26.761 "snapshot": false, 00:22:26.761 "clone": false, 00:22:26.761 "esnap_clone": false 00:22:26.761 } 00:22:26.761 } 00:22:26.761 } 00:22:26.761 ]' 00:22:26.761 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:26.761 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:26.761 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:26.761 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:26.761 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:26.761 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:22:26.761 14:03:51 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:22:26.761 14:03:51 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:22:26.761 14:03:51 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:27.019 14:03:51 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:27.019 14:03:51 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:27.019 14:03:51 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 845c8a29-d0be-446d-a321-69836b68eda7 00:22:27.019 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=845c8a29-d0be-446d-a321-69836b68eda7 00:22:27.019 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:27.019 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:22:27.019 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:22:27.019 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 845c8a29-d0be-446d-a321-69836b68eda7 00:22:27.275 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:27.275 { 00:22:27.275 "name": "845c8a29-d0be-446d-a321-69836b68eda7", 00:22:27.275 "aliases": [ 00:22:27.275 "lvs/nvme0n1p0" 00:22:27.275 ], 00:22:27.275 "product_name": "Logical Volume", 00:22:27.275 "block_size": 4096, 00:22:27.275 "num_blocks": 26476544, 00:22:27.275 "uuid": "845c8a29-d0be-446d-a321-69836b68eda7", 00:22:27.275 "assigned_rate_limits": { 00:22:27.275 "rw_ios_per_sec": 0, 00:22:27.275 "rw_mbytes_per_sec": 0, 00:22:27.275 "r_mbytes_per_sec": 0, 00:22:27.275 "w_mbytes_per_sec": 0 00:22:27.275 }, 00:22:27.275 "claimed": false, 00:22:27.275 "zoned": false, 00:22:27.275 "supported_io_types": { 00:22:27.275 "read": true, 00:22:27.275 "write": true, 00:22:27.275 "unmap": true, 00:22:27.275 "flush": false, 00:22:27.275 "reset": true, 00:22:27.275 "nvme_admin": false, 00:22:27.275 "nvme_io": false, 00:22:27.275 "nvme_io_md": false, 00:22:27.275 "write_zeroes": true, 00:22:27.275 "zcopy": false, 00:22:27.275 "get_zone_info": false, 00:22:27.275 "zone_management": false, 00:22:27.275 "zone_append": false, 00:22:27.275 "compare": false, 00:22:27.275 "compare_and_write": false, 00:22:27.275 "abort": false, 00:22:27.275 "seek_hole": true, 00:22:27.275 "seek_data": true, 00:22:27.275 
"copy": false, 00:22:27.275 "nvme_iov_md": false 00:22:27.275 }, 00:22:27.275 "driver_specific": { 00:22:27.275 "lvol": { 00:22:27.275 "lvol_store_uuid": "3f29c725-b252-4027-b3d1-0b2c0b387695", 00:22:27.275 "base_bdev": "nvme0n1", 00:22:27.275 "thin_provision": true, 00:22:27.275 "num_allocated_clusters": 0, 00:22:27.275 "snapshot": false, 00:22:27.275 "clone": false, 00:22:27.275 "esnap_clone": false 00:22:27.275 } 00:22:27.275 } 00:22:27.275 } 00:22:27.275 ]' 00:22:27.276 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:27.276 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:27.276 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:27.545 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:27.545 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:27.545 14:03:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:22:27.545 14:03:51 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:22:27.545 14:03:51 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:27.805 14:03:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:22:27.805 14:03:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 845c8a29-d0be-446d-a321-69836b68eda7 00:22:27.805 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=845c8a29-d0be-446d-a321-69836b68eda7 00:22:27.805 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:27.805 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:22:27.805 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:22:27.805 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 845c8a29-d0be-446d-a321-69836b68eda7 00:22:28.063 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:28.063 { 00:22:28.063 "name": "845c8a29-d0be-446d-a321-69836b68eda7", 00:22:28.063 "aliases": [ 00:22:28.063 "lvs/nvme0n1p0" 00:22:28.063 ], 00:22:28.063 "product_name": "Logical Volume", 00:22:28.063 "block_size": 4096, 00:22:28.063 "num_blocks": 26476544, 00:22:28.063 "uuid": "845c8a29-d0be-446d-a321-69836b68eda7", 00:22:28.063 "assigned_rate_limits": { 00:22:28.063 "rw_ios_per_sec": 0, 00:22:28.063 "rw_mbytes_per_sec": 0, 00:22:28.063 "r_mbytes_per_sec": 0, 00:22:28.063 "w_mbytes_per_sec": 0 00:22:28.063 }, 00:22:28.063 "claimed": false, 00:22:28.063 "zoned": false, 00:22:28.063 "supported_io_types": { 00:22:28.063 "read": true, 00:22:28.063 "write": true, 00:22:28.063 "unmap": true, 00:22:28.063 "flush": false, 00:22:28.063 "reset": true, 00:22:28.063 "nvme_admin": false, 00:22:28.063 "nvme_io": false, 00:22:28.063 "nvme_io_md": false, 00:22:28.063 "write_zeroes": true, 00:22:28.063 "zcopy": false, 00:22:28.063 "get_zone_info": false, 00:22:28.063 "zone_management": false, 00:22:28.063 "zone_append": false, 00:22:28.063 "compare": false, 00:22:28.063 "compare_and_write": false, 00:22:28.063 "abort": false, 00:22:28.063 "seek_hole": true, 00:22:28.063 "seek_data": true, 00:22:28.063 "copy": false, 00:22:28.063 "nvme_iov_md": false 00:22:28.063 }, 00:22:28.063 "driver_specific": { 00:22:28.063 "lvol": { 00:22:28.063 "lvol_store_uuid": "3f29c725-b252-4027-b3d1-0b2c0b387695", 00:22:28.063 "base_bdev": 
"nvme0n1", 00:22:28.063 "thin_provision": true, 00:22:28.063 "num_allocated_clusters": 0, 00:22:28.063 "snapshot": false, 00:22:28.063 "clone": false, 00:22:28.063 "esnap_clone": false 00:22:28.063 } 00:22:28.063 } 00:22:28.063 } 00:22:28.063 ]' 00:22:28.063 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:28.064 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:28.064 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:28.064 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:28.064 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:28.064 14:03:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:22:28.064 14:03:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:22:28.064 14:03:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 845c8a29-d0be-446d-a321-69836b68eda7 -c nvc0n1p0 --l2p_dram_limit 20 00:22:28.322 [2024-07-15 14:03:52.702183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.322 [2024-07-15 14:03:52.702241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:28.322 [2024-07-15 14:03:52.702274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:28.322 [2024-07-15 14:03:52.702290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.322 [2024-07-15 14:03:52.702395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.322 [2024-07-15 14:03:52.702426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.322 [2024-07-15 14:03:52.702454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:28.322 [2024-07-15 14:03:52.702479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.322 [2024-07-15 14:03:52.702530] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:28.322 [2024-07-15 14:03:52.703548] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:28.322 [2024-07-15 14:03:52.703597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.322 [2024-07-15 14:03:52.703615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.322 [2024-07-15 14:03:52.703631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:22:28.322 [2024-07-15 14:03:52.703643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.322 [2024-07-15 14:03:52.703775] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 155e3ddb-7dd2-4e8c-af96-3963cbfe0f5f 00:22:28.322 [2024-07-15 14:03:52.704788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.322 [2024-07-15 14:03:52.704823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:28.322 [2024-07-15 14:03:52.704839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:28.322 [2024-07-15 14:03:52.704856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.322 [2024-07-15 14:03:52.709435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.322 [2024-07-15 14:03:52.709488] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.322 [2024-07-15 14:03:52.709507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.528 ms 00:22:28.322 [2024-07-15 14:03:52.709521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.322 [2024-07-15 14:03:52.709646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.322 [2024-07-15 14:03:52.709672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.322 [2024-07-15 14:03:52.709691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:22:28.322 [2024-07-15 14:03:52.709708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.322 [2024-07-15 14:03:52.709776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.322 [2024-07-15 14:03:52.709797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:28.322 [2024-07-15 14:03:52.709810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:28.322 [2024-07-15 14:03:52.709824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.322 [2024-07-15 14:03:52.709855] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:28.322 [2024-07-15 14:03:52.714534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.322 [2024-07-15 14:03:52.714692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.322 [2024-07-15 14:03:52.714837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:22:28.322 [2024-07-15 14:03:52.714981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.322 [2024-07-15 14:03:52.715079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.322 [2024-07-15 14:03:52.715140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:28.322 [2024-07-15 14:03:52.715265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:28.322 [2024-07-15 14:03:52.715422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.322 [2024-07-15 14:03:52.715612] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:28.322 [2024-07-15 14:03:52.715893] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:28.322 [2024-07-15 14:03:52.716075] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:28.322 [2024-07-15 14:03:52.716247] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:28.322 [2024-07-15 14:03:52.716477] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:28.322 [2024-07-15 14:03:52.716630] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:28.322 [2024-07-15 14:03:52.716819] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:28.322 [2024-07-15 14:03:52.716985] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:28.322 [2024-07-15 14:03:52.717111] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:28.322 [2024-07-15 14:03:52.717251] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:22:28.322 [2024-07-15 14:03:52.717330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.322 [2024-07-15 14:03:52.717380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:28.322 [2024-07-15 14:03:52.717487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:22:28.322 [2024-07-15 14:03:52.717514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.322 [2024-07-15 14:03:52.717616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.322 [2024-07-15 14:03:52.717637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:28.322 [2024-07-15 14:03:52.717653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:28.322 [2024-07-15 14:03:52.717664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.322 [2024-07-15 14:03:52.717771] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:28.322 [2024-07-15 14:03:52.717794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:28.322 [2024-07-15 14:03:52.717810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.322 [2024-07-15 14:03:52.717822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.322 [2024-07-15 14:03:52.717839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:28.322 [2024-07-15 14:03:52.717851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:28.322 [2024-07-15 14:03:52.717864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:28.322 [2024-07-15 14:03:52.717875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:28.322 [2024-07-15 14:03:52.717889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:28.322 [2024-07-15 14:03:52.717900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.322 [2024-07-15 14:03:52.717912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:28.322 [2024-07-15 14:03:52.717924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:28.322 [2024-07-15 14:03:52.717936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.322 [2024-07-15 14:03:52.717947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:28.322 [2024-07-15 14:03:52.717962] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:28.322 [2024-07-15 14:03:52.717973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.322 [2024-07-15 14:03:52.717988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:28.323 [2024-07-15 14:03:52.717999] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:28.323 [2024-07-15 14:03:52.718025] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.323 [2024-07-15 14:03:52.718036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:28.323 [2024-07-15 14:03:52.718049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:28.323 [2024-07-15 14:03:52.718060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.323 [2024-07-15 14:03:52.718073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:28.323 [2024-07-15 14:03:52.718084] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:28.323 [2024-07-15 14:03:52.718097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.323 [2024-07-15 14:03:52.718107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:28.323 [2024-07-15 14:03:52.718120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:28.323 [2024-07-15 14:03:52.718131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.323 [2024-07-15 14:03:52.718143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:28.323 [2024-07-15 14:03:52.718154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:28.323 [2024-07-15 14:03:52.718167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.323 [2024-07-15 14:03:52.718179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:28.323 [2024-07-15 14:03:52.718194] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:28.323 [2024-07-15 14:03:52.718205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.323 [2024-07-15 14:03:52.718218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:28.323 [2024-07-15 14:03:52.718229] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:28.323 [2024-07-15 14:03:52.718242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.323 [2024-07-15 14:03:52.718252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:28.323 [2024-07-15 14:03:52.718280] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:28.323 [2024-07-15 14:03:52.718294] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.323 [2024-07-15 14:03:52.718322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:28.323 [2024-07-15 14:03:52.718335] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:28.323 [2024-07-15 14:03:52.718348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.323 [2024-07-15 14:03:52.718359] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:28.323 [2024-07-15 14:03:52.718373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:28.323 [2024-07-15 14:03:52.718385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.323 [2024-07-15 14:03:52.718399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.323 [2024-07-15 14:03:52.718411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:28.323 [2024-07-15 14:03:52.718426] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:28.323 [2024-07-15 14:03:52.718437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:28.323 [2024-07-15 14:03:52.718450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:28.323 [2024-07-15 14:03:52.718461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:28.323 [2024-07-15 14:03:52.718474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:28.323 [2024-07-15 14:03:52.718491] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:28.323 [2024-07-15 14:03:52.718511] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.323 [2024-07-15 14:03:52.718525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:28.323 [2024-07-15 14:03:52.718544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:28.323 [2024-07-15 14:03:52.718556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:28.323 [2024-07-15 14:03:52.718570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:28.323 [2024-07-15 14:03:52.718582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:28.323 [2024-07-15 14:03:52.718596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:28.323 [2024-07-15 14:03:52.718607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:28.323 [2024-07-15 14:03:52.718622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:28.323 [2024-07-15 14:03:52.718634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:28.323 [2024-07-15 14:03:52.718652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:28.323 [2024-07-15 14:03:52.718664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:28.323 [2024-07-15 14:03:52.718678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:28.323 [2024-07-15 14:03:52.718689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:28.323 [2024-07-15 14:03:52.718703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:28.323 [2024-07-15 14:03:52.718715] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:28.323 [2024-07-15 14:03:52.718730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.323 [2024-07-15 14:03:52.718743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:28.323 [2024-07-15 14:03:52.718757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:28.323 [2024-07-15 14:03:52.718769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:28.323 [2024-07-15 14:03:52.718783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:28.323 [2024-07-15 14:03:52.718796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.323 [2024-07-15 14:03:52.718810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:28.323 [2024-07-15 14:03:52.718825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:22:28.323 [2024-07-15 14:03:52.718838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.323 [2024-07-15 14:03:52.718884] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:28.323 [2024-07-15 14:03:52.718906] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:30.220 [2024-07-15 14:03:54.625753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.220 [2024-07-15 14:03:54.626066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:30.220 [2024-07-15 14:03:54.626211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1906.878 ms 00:22:30.220 [2024-07-15 14:03:54.626287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.220 [2024-07-15 14:03:54.674540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.220 [2024-07-15 14:03:54.674811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:30.220 [2024-07-15 14:03:54.674957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.891 ms 00:22:30.220 [2024-07-15 14:03:54.675016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.220 [2024-07-15 14:03:54.675296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.220 [2024-07-15 14:03:54.675390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:30.220 [2024-07-15 14:03:54.675541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:30.220 [2024-07-15 14:03:54.675606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.220 [2024-07-15 14:03:54.713955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.220 [2024-07-15 14:03:54.714211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:30.220 [2024-07-15 14:03:54.714388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.232 ms 00:22:30.220 [2024-07-15 14:03:54.714449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.220 [2024-07-15 14:03:54.714591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.220 [2024-07-15 14:03:54.714732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:30.220 [2024-07-15 14:03:54.714861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:30.220 [2024-07-15 14:03:54.714981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.220 [2024-07-15 14:03:54.715428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.220 [2024-07-15 14:03:54.715575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:30.220 [2024-07-15 14:03:54.715702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:22:30.220 [2024-07-15 14:03:54.715761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.220 [2024-07-15 14:03:54.715969] mngt/ftl_mngt.c: 
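The region sizes in the superblock dump are expressed in 4 KiB FTL blocks, and they line up with the MiB figures in the layout dump above; the L2P region likewise falls straight out of the entry count. A quick consistency check (bash arithmetic only, not part of the test):

    echo $(( 0x5000 * 4096 / 1048576 ))     # 80     -> l2p region, "blocks: 80.00 MiB"
    echo $(( 0x800 * 4096 / 1048576 ))      # 8      -> each p2l checkpoint region, "blocks: 8.00 MiB"
    echo $(( 0x1900000 * 4096 / 1048576 ))  # 102400 -> data_btm on the base device, "blocks: 102400.00 MiB"
    echo $(( 20971520 * 4 / 1048576 ))      # 80     -> 20971520 L2P entries x 4 B addresses = 80 MiB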
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.220 [2024-07-15 14:03:54.716037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:30.220 [2024-07-15 14:03:54.716147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:22:30.220 [2024-07-15 14:03:54.716264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.220 [2024-07-15 14:03:54.732267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.220 [2024-07-15 14:03:54.732471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:30.220 [2024-07-15 14:03:54.732603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.852 ms 00:22:30.220 [2024-07-15 14:03:54.732663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.220 [2024-07-15 14:03:54.746122] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:22:30.220 [2024-07-15 14:03:54.751228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.220 [2024-07-15 14:03:54.751271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:30.220 [2024-07-15 14:03:54.751295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.321 ms 00:22:30.220 [2024-07-15 14:03:54.751326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.478 [2024-07-15 14:03:54.807325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.478 [2024-07-15 14:03:54.807418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:30.478 [2024-07-15 14:03:54.807444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.937 ms 00:22:30.478 [2024-07-15 14:03:54.807458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.478 [2024-07-15 14:03:54.807699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.478 [2024-07-15 14:03:54.807720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:30.478 [2024-07-15 14:03:54.807739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:22:30.478 [2024-07-15 14:03:54.807752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.478 [2024-07-15 14:03:54.841871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.478 [2024-07-15 14:03:54.841972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:30.478 [2024-07-15 14:03:54.842016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.020 ms 00:22:30.478 [2024-07-15 14:03:54.842040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.478 [2024-07-15 14:03:54.874149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.478 [2024-07-15 14:03:54.874211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:30.478 [2024-07-15 14:03:54.874236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.025 ms 00:22:30.478 [2024-07-15 14:03:54.874249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.478 [2024-07-15 14:03:54.875011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.478 [2024-07-15 14:03:54.875047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:30.478 [2024-07-15 14:03:54.875068] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:22:30.478 [2024-07-15 14:03:54.875080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.478 [2024-07-15 14:03:54.960192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.478 [2024-07-15 14:03:54.960266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:30.478 [2024-07-15 14:03:54.960295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.042 ms 00:22:30.478 [2024-07-15 14:03:54.960327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.478 [2024-07-15 14:03:54.992087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.478 [2024-07-15 14:03:54.992144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:30.478 [2024-07-15 14:03:54.992166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.701 ms 00:22:30.478 [2024-07-15 14:03:54.992180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.478 [2024-07-15 14:03:55.023872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.478 [2024-07-15 14:03:55.023931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:30.478 [2024-07-15 14:03:55.023955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.635 ms 00:22:30.478 [2024-07-15 14:03:55.023967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.736 [2024-07-15 14:03:55.055686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.736 [2024-07-15 14:03:55.055741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:30.736 [2024-07-15 14:03:55.055763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.659 ms 00:22:30.736 [2024-07-15 14:03:55.055775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.736 [2024-07-15 14:03:55.055842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.736 [2024-07-15 14:03:55.055861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:30.736 [2024-07-15 14:03:55.055879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:30.736 [2024-07-15 14:03:55.055891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.736 [2024-07-15 14:03:55.056015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.736 [2024-07-15 14:03:55.056036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:30.736 [2024-07-15 14:03:55.056051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:30.736 [2024-07-15 14:03:55.056063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.736 [2024-07-15 14:03:55.057109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2354.432 ms, result 0 00:22:30.736 { 00:22:30.736 "name": "ftl0", 00:22:30.736 "uuid": "155e3ddb-7dd2-4e8c-af96-3963cbfe0f5f" 00:22:30.736 } 00:22:30.736 14:03:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:22:30.736 14:03:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:22:30.736 14:03:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:22:30.993 14:03:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-07-15 14:03:55.437657] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
I/O size of 69632 is greater than zero copy threshold (65536).
00:22:30.993 Zero copy mechanism will not be used.
00:22:30.993 Running I/O for 4 seconds...
00:22:35.170
00:22:35.170 Latency(us)
00:22:35.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:35.170 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:22:35.170 ftl0 : 4.00 2154.13 143.05 0.00 0.00 488.38 223.42 4676.89
00:22:35.170 ===================================================================================================================
00:22:35.170 Total : 2154.13 143.05 0.00 0.00 488.38 223.42 4676.89
00:22:35.170 [2024-07-15 14:03:59.447452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:22:35.171 0
00:22:35.171 14:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:22:39.356 [2024-07-15 14:03:59.570701] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:22:39.356 Running I/O for 4 seconds...
00:22:39.356
00:22:39.356 Latency(us)
00:22:39.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:39.356 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:22:39.356 ftl0 : 4.03 6993.95 27.32 0.00 0.00 18232.31 381.67 35031.97
00:22:39.356 ===================================================================================================================
00:22:39.356 Total : 6993.95 27.32 0.00 0.00 18232.31 0.00 35031.97
00:22:39.356 [2024-07-15 14:04:03.611195] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:22:39.357 0
00:22:39.357 14:04:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-07-15 14:04:03.748858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
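All three measurement passes are driven through bdevperf's RPC helper rather than its command line, since the process was started idle with -z. Summarized, with flags exactly as traced (the first pass's 69632-byte I/O size sits above the 65536-byte zero-copy threshold, which is why the log reports that zero copy will not be used):

    B=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $B perform_tests -q 1   -w randwrite -t 4 -o 69632   # QD 1, 68 KiB random writes
    $B perform_tests -q 128 -w randwrite -t 4 -o 4096    # QD 128, 4 KiB random writes
    $B perform_tests -q 128 -w verify    -t 4 -o 4096    # QD 128, 4 KiB writes with read-back verification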
00:22:43.539
00:22:43.539 Latency(us)
00:22:43.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:43.539 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:43.539 Verification LBA range: start 0x0 length 0x1400000
00:22:43.539 ftl0 : 4.01 5585.35 21.82 0.00 0.00 22839.06 381.67 53858.68
00:22:43.539 ===================================================================================================================
00:22:43.539 Total : 5585.35 21.82 0.00 0.00 22839.06 0.00 53858.68
00:22:43.539 0
[2024-07-15 14:04:07.777702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
14:04:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-07-15 14:04:08.073200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-15 14:04:08.073273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-07-15 14:04:08.073300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
[2024-07-15 14:04:08.073336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-15 14:04:08.073379] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-07-15 14:04:08.076717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-15 14:04:08.076759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-07-15 14:04:08.076777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.313 ms
[2024-07-15 14:04:08.076793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-15 14:04:08.078342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-15 14:04:08.078391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-07-15 14:04:08.078411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.518 ms
[2024-07-15 14:04:08.078425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-15 14:04:08.256949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-15 14:04:08.257046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
[2024-07-15 14:04:08.257073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.494 ms
[2024-07-15 14:04:08.257092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-15 14:04:08.263828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-15 14:04:08.263871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
[2024-07-15 14:04:08.263889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.689 ms
[2024-07-15 14:04:08.263903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-15 14:04:08.294915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-15 14:04:08.294982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
[2024-07-15 14:04:08.295003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 30.920 ms 00:22:43.797 [2024-07-15 14:04:08.295019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.797 [2024-07-15 14:04:08.313478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.797 [2024-07-15 14:04:08.313533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:43.797 [2024-07-15 14:04:08.313553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.408 ms 00:22:43.797 [2024-07-15 14:04:08.313571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.797 [2024-07-15 14:04:08.313752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.797 [2024-07-15 14:04:08.313780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:43.797 [2024-07-15 14:04:08.313794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:22:43.797 [2024-07-15 14:04:08.313811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.056 [2024-07-15 14:04:08.344972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.056 [2024-07-15 14:04:08.345045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:44.056 [2024-07-15 14:04:08.345066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.137 ms 00:22:44.056 [2024-07-15 14:04:08.345081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.056 [2024-07-15 14:04:08.375913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.056 [2024-07-15 14:04:08.375966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:44.056 [2024-07-15 14:04:08.375986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.775 ms 00:22:44.056 [2024-07-15 14:04:08.376000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.056 [2024-07-15 14:04:08.406416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.056 [2024-07-15 14:04:08.406466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:44.056 [2024-07-15 14:04:08.406484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.369 ms 00:22:44.056 [2024-07-15 14:04:08.406498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.056 [2024-07-15 14:04:08.436916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.056 [2024-07-15 14:04:08.436971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:44.056 [2024-07-15 14:04:08.436989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.308 ms 00:22:44.056 [2024-07-15 14:04:08.437007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.056 [2024-07-15 14:04:08.437055] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:44.056 [2024-07-15 14:04:08.437082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:22:44.056 [2024-07-15 14:04:08.437140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.437999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.438013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.438026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.438040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.438052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:44.056 [2024-07-15 14:04:08.438066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438247] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:44.057 [2024-07-15 14:04:08.438591] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:44.057 [2024-07-15 14:04:08.438604] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 155e3ddb-7dd2-4e8c-af96-3963cbfe0f5f 00:22:44.057 [2024-07-15 14:04:08.438618] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:44.057 [2024-07-15 14:04:08.438629] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:22:44.057 [2024-07-15 14:04:08.438642] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:44.057 [2024-07-15 14:04:08.438655] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:44.057 [2024-07-15 14:04:08.438671] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:44.057 [2024-07-15 14:04:08.438683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:44.057 [2024-07-15 14:04:08.438696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:44.057 [2024-07-15 14:04:08.438707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:44.057 [2024-07-15 14:04:08.438722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:44.057 [2024-07-15 14:04:08.438735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.057 [2024-07-15 14:04:08.438749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:44.057 [2024-07-15 14:04:08.438761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:22:44.057 [2024-07-15 14:04:08.438775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.057 [2024-07-15 14:04:08.455562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.057 [2024-07-15 14:04:08.455643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:44.057 [2024-07-15 14:04:08.455668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.721 ms 00:22:44.057 [2024-07-15 14:04:08.455683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.057 [2024-07-15 14:04:08.456141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.057 [2024-07-15 14:04:08.456167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:44.057 [2024-07-15 14:04:08.456181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:22:44.057 [2024-07-15 14:04:08.456196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.057 [2024-07-15 14:04:08.495970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.057 [2024-07-15 14:04:08.496049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:44.057 [2024-07-15 14:04:08.496070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.057 [2024-07-15 14:04:08.496087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.057 [2024-07-15 14:04:08.496170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.057 [2024-07-15 14:04:08.496188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:44.057 [2024-07-15 14:04:08.496201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.057 [2024-07-15 14:04:08.496215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.057 [2024-07-15 14:04:08.496364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.057 [2024-07-15 14:04:08.496391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:44.057 [2024-07-15 14:04:08.496419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.057 [2024-07-15 14:04:08.496436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.057 [2024-07-15 14:04:08.496461] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.057 [2024-07-15 14:04:08.496478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:44.057 [2024-07-15 14:04:08.496491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.057 [2024-07-15 14:04:08.496504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.057 [2024-07-15 14:04:08.594852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.057 [2024-07-15 14:04:08.594930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:44.057 [2024-07-15 14:04:08.594953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.057 [2024-07-15 14:04:08.594971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.315 [2024-07-15 14:04:08.679695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.315 [2024-07-15 14:04:08.679773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:44.315 [2024-07-15 14:04:08.679795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.315 [2024-07-15 14:04:08.679810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.315 [2024-07-15 14:04:08.679919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.315 [2024-07-15 14:04:08.679944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:44.315 [2024-07-15 14:04:08.679958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.315 [2024-07-15 14:04:08.679972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.315 [2024-07-15 14:04:08.680034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.315 [2024-07-15 14:04:08.680056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:44.315 [2024-07-15 14:04:08.680069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.315 [2024-07-15 14:04:08.680083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.315 [2024-07-15 14:04:08.680206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.315 [2024-07-15 14:04:08.680231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:44.315 [2024-07-15 14:04:08.680244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.315 [2024-07-15 14:04:08.680261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.315 [2024-07-15 14:04:08.680339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.315 [2024-07-15 14:04:08.680363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:44.316 [2024-07-15 14:04:08.680377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.316 [2024-07-15 14:04:08.680402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.316 [2024-07-15 14:04:08.680450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.316 [2024-07-15 14:04:08.680469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:44.316 [2024-07-15 14:04:08.680482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.316 [2024-07-15 14:04:08.680496] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:22:44.316 [2024-07-15 14:04:08.680553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:44.316 [2024-07-15 14:04:08.680574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:44.316 [2024-07-15 14:04:08.680587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:44.316 [2024-07-15 14:04:08.680601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:44.316 [2024-07-15 14:04:08.680748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 607.520 ms, result 0
00:22:44.316 true
00:22:44.316 14:04:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 80551
00:22:44.316 14:04:08 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 80551 ']'
00:22:44.316 14:04:08 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 80551
00:22:44.316 14:04:08 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname
00:22:44.316 14:04:08 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:44.316 14:04:08 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80551
00:22:44.316 killing process with pid 80551
Received shutdown signal, test time was about 4.000000 seconds
00:22:44.316
00:22:44.316                                                  Latency(us)
00:22:44.316 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:44.316 ===================================================================================================================
00:22:44.316 Total              :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
14:04:08 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0
14:04:08 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
14:04:08 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80551'
14:04:08 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 80551
14:04:08 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 80551
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:22:45.687 Remove shared memory files
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:22:45.687 ************************************
00:22:45.687 END TEST ftl_bdevperf
00:22:45.687 ************************************
00:22:45.687
00:22:45.687 real 0m22.134s
00:22:45.687 user 0m25.876s
00:22:45.687 sys 0m1.037s
00:22:45.687 14:04:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:22:45.687 14:04:09 ftl.ftl_bdevperf --
common/autotest_common.sh@10 -- # set +x 00:22:45.687 14:04:09 ftl -- common/autotest_common.sh@1142 -- # return 0 00:22:45.687 14:04:09 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:45.687 14:04:09 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:45.687 14:04:09 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.687 14:04:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:45.687 ************************************ 00:22:45.687 START TEST ftl_trim 00:22:45.688 ************************************ 00:22:45.688 14:04:09 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:45.688 * Looking for test storage... 00:22:45.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:45.688 
14:04:10 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=80893 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:22:45.688 14:04:10 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 80893 00:22:45.688 14:04:10 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80893 ']' 00:22:45.688 14:04:10 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.688 14:04:10 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.688 14:04:10 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.688 14:04:10 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.688 14:04:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:45.688 [2024-07-15 14:04:10.197413] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
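
In outline, trim.sh@39-41 above amount to: start the SPDK target on three cores (-m 0x7) and block until its RPC socket answers. A minimal sketch of that pattern, using the paths resolved earlier in this trace; the rpc_get_methods probe and the 0.5 s retry interval below are illustrative stand-ins for autotest_common.sh's waitforlisten, not a quote of it:

rootdir=/home/vagrant/spdk_repo/spdk
"$rootdir/build/bin/spdk_tgt" -m 0x7 &      # 0x7 = cores 0-2, matching the reactors that start below
svcpid=$!
# Poll the default RPC socket until the target answers, bailing out if it died.
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done

The DPDK banner that follows is the target coming up under that core mask; "Total cores available: 3" and the three reactor threads correspond to -m 0x7.
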
00:22:45.688 [2024-07-15 14:04:10.197836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80893 ] 00:22:45.946 [2024-07-15 14:04:10.397528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:46.204 [2024-07-15 14:04:10.609948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.204 [2024-07-15 14:04:10.610013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.204 [2024-07-15 14:04:10.610018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.837 14:04:11 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.837 14:04:11 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:22:46.837 14:04:11 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:46.837 14:04:11 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:22:46.837 14:04:11 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:46.837 14:04:11 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:22:46.837 14:04:11 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:22:46.837 14:04:11 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:47.402 14:04:11 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:47.402 14:04:11 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:22:47.402 14:04:11 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:47.402 14:04:11 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:47.402 14:04:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:47.402 14:04:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:47.402 14:04:11 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:22:47.402 14:04:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:47.660 14:04:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:47.660 { 00:22:47.660 "name": "nvme0n1", 00:22:47.660 "aliases": [ 00:22:47.660 "8f20384a-9b65-4f5e-a087-efd8c97692ee" 00:22:47.660 ], 00:22:47.660 "product_name": "NVMe disk", 00:22:47.660 "block_size": 4096, 00:22:47.660 "num_blocks": 1310720, 00:22:47.660 "uuid": "8f20384a-9b65-4f5e-a087-efd8c97692ee", 00:22:47.660 "assigned_rate_limits": { 00:22:47.660 "rw_ios_per_sec": 0, 00:22:47.660 "rw_mbytes_per_sec": 0, 00:22:47.660 "r_mbytes_per_sec": 0, 00:22:47.660 "w_mbytes_per_sec": 0 00:22:47.660 }, 00:22:47.660 "claimed": true, 00:22:47.660 "claim_type": "read_many_write_one", 00:22:47.660 "zoned": false, 00:22:47.660 "supported_io_types": { 00:22:47.660 "read": true, 00:22:47.660 "write": true, 00:22:47.660 "unmap": true, 00:22:47.660 "flush": true, 00:22:47.660 "reset": true, 00:22:47.660 "nvme_admin": true, 00:22:47.660 "nvme_io": true, 00:22:47.660 "nvme_io_md": false, 00:22:47.661 "write_zeroes": true, 00:22:47.661 "zcopy": false, 00:22:47.661 "get_zone_info": false, 00:22:47.661 "zone_management": false, 00:22:47.661 "zone_append": false, 00:22:47.661 "compare": true, 00:22:47.661 "compare_and_write": false, 00:22:47.661 "abort": true, 00:22:47.661 "seek_hole": false, 00:22:47.661 "seek_data": false, 00:22:47.661 
"copy": true, 00:22:47.661 "nvme_iov_md": false 00:22:47.661 }, 00:22:47.661 "driver_specific": { 00:22:47.661 "nvme": [ 00:22:47.661 { 00:22:47.661 "pci_address": "0000:00:11.0", 00:22:47.661 "trid": { 00:22:47.661 "trtype": "PCIe", 00:22:47.661 "traddr": "0000:00:11.0" 00:22:47.661 }, 00:22:47.661 "ctrlr_data": { 00:22:47.661 "cntlid": 0, 00:22:47.661 "vendor_id": "0x1b36", 00:22:47.661 "model_number": "QEMU NVMe Ctrl", 00:22:47.661 "serial_number": "12341", 00:22:47.661 "firmware_revision": "8.0.0", 00:22:47.661 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:47.661 "oacs": { 00:22:47.661 "security": 0, 00:22:47.661 "format": 1, 00:22:47.661 "firmware": 0, 00:22:47.661 "ns_manage": 1 00:22:47.661 }, 00:22:47.661 "multi_ctrlr": false, 00:22:47.661 "ana_reporting": false 00:22:47.661 }, 00:22:47.661 "vs": { 00:22:47.661 "nvme_version": "1.4" 00:22:47.661 }, 00:22:47.661 "ns_data": { 00:22:47.661 "id": 1, 00:22:47.661 "can_share": false 00:22:47.661 } 00:22:47.661 } 00:22:47.661 ], 00:22:47.661 "mp_policy": "active_passive" 00:22:47.661 } 00:22:47.661 } 00:22:47.661 ]' 00:22:47.661 14:04:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:47.661 14:04:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:22:47.661 14:04:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:47.661 14:04:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:47.661 14:04:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:47.661 14:04:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:22:47.661 14:04:12 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:22:47.661 14:04:12 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:47.661 14:04:12 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:22:47.661 14:04:12 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:47.661 14:04:12 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:47.918 14:04:12 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=3f29c725-b252-4027-b3d1-0b2c0b387695 00:22:47.918 14:04:12 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:22:47.918 14:04:12 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f29c725-b252-4027-b3d1-0b2c0b387695 00:22:48.176 14:04:12 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:48.433 14:04:12 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=71865862-dc60-489e-9157-e3790f17938f 00:22:48.433 14:04:12 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 71865862-dc60-489e-9157-e3790f17938f 00:22:48.997 14:04:13 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:48.997 14:04:13 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:48.997 14:04:13 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:22:48.997 14:04:13 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:48.997 14:04:13 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:48.997 14:04:13 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:22:48.997 14:04:13 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:48.997 14:04:13 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:48.997 14:04:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:48.997 14:04:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:48.997 14:04:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:22:48.997 14:04:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:49.254 14:04:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:49.254 { 00:22:49.254 "name": "0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd", 00:22:49.254 "aliases": [ 00:22:49.254 "lvs/nvme0n1p0" 00:22:49.254 ], 00:22:49.254 "product_name": "Logical Volume", 00:22:49.254 "block_size": 4096, 00:22:49.254 "num_blocks": 26476544, 00:22:49.254 "uuid": "0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd", 00:22:49.254 "assigned_rate_limits": { 00:22:49.254 "rw_ios_per_sec": 0, 00:22:49.254 "rw_mbytes_per_sec": 0, 00:22:49.254 "r_mbytes_per_sec": 0, 00:22:49.254 "w_mbytes_per_sec": 0 00:22:49.254 }, 00:22:49.254 "claimed": false, 00:22:49.254 "zoned": false, 00:22:49.254 "supported_io_types": { 00:22:49.254 "read": true, 00:22:49.254 "write": true, 00:22:49.254 "unmap": true, 00:22:49.254 "flush": false, 00:22:49.254 "reset": true, 00:22:49.254 "nvme_admin": false, 00:22:49.254 "nvme_io": false, 00:22:49.254 "nvme_io_md": false, 00:22:49.254 "write_zeroes": true, 00:22:49.254 "zcopy": false, 00:22:49.254 "get_zone_info": false, 00:22:49.254 "zone_management": false, 00:22:49.254 "zone_append": false, 00:22:49.254 "compare": false, 00:22:49.254 "compare_and_write": false, 00:22:49.254 "abort": false, 00:22:49.254 "seek_hole": true, 00:22:49.254 "seek_data": true, 00:22:49.254 "copy": false, 00:22:49.254 "nvme_iov_md": false 00:22:49.254 }, 00:22:49.254 "driver_specific": { 00:22:49.254 "lvol": { 00:22:49.254 "lvol_store_uuid": "71865862-dc60-489e-9157-e3790f17938f", 00:22:49.254 "base_bdev": "nvme0n1", 00:22:49.254 "thin_provision": true, 00:22:49.254 "num_allocated_clusters": 0, 00:22:49.254 "snapshot": false, 00:22:49.254 "clone": false, 00:22:49.254 "esnap_clone": false 00:22:49.254 } 00:22:49.254 } 00:22:49.254 } 00:22:49.254 ]' 00:22:49.254 14:04:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:49.254 14:04:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:22:49.254 14:04:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:49.254 14:04:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:49.254 14:04:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:49.254 14:04:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:22:49.254 14:04:13 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:22:49.254 14:04:13 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:22:49.254 14:04:13 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:49.511 14:04:14 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:49.511 14:04:14 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:49.511 14:04:14 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:49.511 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:49.511 
14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:49.511 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:49.511 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:22:49.511 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:50.077 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:50.077 { 00:22:50.077 "name": "0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd", 00:22:50.077 "aliases": [ 00:22:50.077 "lvs/nvme0n1p0" 00:22:50.077 ], 00:22:50.077 "product_name": "Logical Volume", 00:22:50.077 "block_size": 4096, 00:22:50.077 "num_blocks": 26476544, 00:22:50.077 "uuid": "0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd", 00:22:50.077 "assigned_rate_limits": { 00:22:50.077 "rw_ios_per_sec": 0, 00:22:50.077 "rw_mbytes_per_sec": 0, 00:22:50.077 "r_mbytes_per_sec": 0, 00:22:50.077 "w_mbytes_per_sec": 0 00:22:50.077 }, 00:22:50.077 "claimed": false, 00:22:50.077 "zoned": false, 00:22:50.077 "supported_io_types": { 00:22:50.077 "read": true, 00:22:50.077 "write": true, 00:22:50.077 "unmap": true, 00:22:50.077 "flush": false, 00:22:50.077 "reset": true, 00:22:50.077 "nvme_admin": false, 00:22:50.077 "nvme_io": false, 00:22:50.077 "nvme_io_md": false, 00:22:50.077 "write_zeroes": true, 00:22:50.077 "zcopy": false, 00:22:50.077 "get_zone_info": false, 00:22:50.077 "zone_management": false, 00:22:50.077 "zone_append": false, 00:22:50.077 "compare": false, 00:22:50.077 "compare_and_write": false, 00:22:50.077 "abort": false, 00:22:50.077 "seek_hole": true, 00:22:50.077 "seek_data": true, 00:22:50.077 "copy": false, 00:22:50.077 "nvme_iov_md": false 00:22:50.077 }, 00:22:50.077 "driver_specific": { 00:22:50.077 "lvol": { 00:22:50.077 "lvol_store_uuid": "71865862-dc60-489e-9157-e3790f17938f", 00:22:50.077 "base_bdev": "nvme0n1", 00:22:50.077 "thin_provision": true, 00:22:50.077 "num_allocated_clusters": 0, 00:22:50.077 "snapshot": false, 00:22:50.077 "clone": false, 00:22:50.077 "esnap_clone": false 00:22:50.077 } 00:22:50.077 } 00:22:50.077 } 00:22:50.077 ]' 00:22:50.077 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:50.077 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:22:50.077 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:50.077 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:50.077 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:50.077 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:22:50.077 14:04:14 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:22:50.077 14:04:14 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:50.334 14:04:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:22:50.334 14:04:14 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:22:50.334 14:04:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:50.334 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:50.334 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:50.334 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:50.334 14:04:14 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:22:50.334 14:04:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd 00:22:50.592 14:04:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:50.593 { 00:22:50.593 "name": "0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd", 00:22:50.593 "aliases": [ 00:22:50.593 "lvs/nvme0n1p0" 00:22:50.593 ], 00:22:50.593 "product_name": "Logical Volume", 00:22:50.593 "block_size": 4096, 00:22:50.593 "num_blocks": 26476544, 00:22:50.593 "uuid": "0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd", 00:22:50.593 "assigned_rate_limits": { 00:22:50.593 "rw_ios_per_sec": 0, 00:22:50.593 "rw_mbytes_per_sec": 0, 00:22:50.593 "r_mbytes_per_sec": 0, 00:22:50.593 "w_mbytes_per_sec": 0 00:22:50.593 }, 00:22:50.593 "claimed": false, 00:22:50.593 "zoned": false, 00:22:50.593 "supported_io_types": { 00:22:50.593 "read": true, 00:22:50.593 "write": true, 00:22:50.593 "unmap": true, 00:22:50.593 "flush": false, 00:22:50.593 "reset": true, 00:22:50.593 "nvme_admin": false, 00:22:50.593 "nvme_io": false, 00:22:50.593 "nvme_io_md": false, 00:22:50.593 "write_zeroes": true, 00:22:50.593 "zcopy": false, 00:22:50.593 "get_zone_info": false, 00:22:50.593 "zone_management": false, 00:22:50.593 "zone_append": false, 00:22:50.593 "compare": false, 00:22:50.593 "compare_and_write": false, 00:22:50.593 "abort": false, 00:22:50.593 "seek_hole": true, 00:22:50.593 "seek_data": true, 00:22:50.593 "copy": false, 00:22:50.593 "nvme_iov_md": false 00:22:50.593 }, 00:22:50.593 "driver_specific": { 00:22:50.593 "lvol": { 00:22:50.593 "lvol_store_uuid": "71865862-dc60-489e-9157-e3790f17938f", 00:22:50.593 "base_bdev": "nvme0n1", 00:22:50.593 "thin_provision": true, 00:22:50.593 "num_allocated_clusters": 0, 00:22:50.593 "snapshot": false, 00:22:50.593 "clone": false, 00:22:50.593 "esnap_clone": false 00:22:50.593 } 00:22:50.593 } 00:22:50.593 } 00:22:50.593 ]' 00:22:50.593 14:04:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:50.593 14:04:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:22:50.593 14:04:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:50.851 14:04:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:50.851 14:04:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:50.851 14:04:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:22:50.851 14:04:15 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:22:50.851 14:04:15 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:22:51.108 [2024-07-15 14:04:15.425014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.108 [2024-07-15 14:04:15.425080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:51.108 [2024-07-15 14:04:15.425101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:51.108 [2024-07-15 14:04:15.425118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.108 [2024-07-15 14:04:15.428512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.108 [2024-07-15 14:04:15.428559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:51.108 [2024-07-15 14:04:15.428577] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.345 ms 00:22:51.108 [2024-07-15 14:04:15.428591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.108 [2024-07-15 14:04:15.428756] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:51.108 [2024-07-15 14:04:15.429718] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:51.108 [2024-07-15 14:04:15.429759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.108 [2024-07-15 14:04:15.429780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:51.108 [2024-07-15 14:04:15.429794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:22:51.108 [2024-07-15 14:04:15.429809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.108 [2024-07-15 14:04:15.430045] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 96438210-a730-46dd-94de-bf7d6eb48d99 00:22:51.108 [2024-07-15 14:04:15.431144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.108 [2024-07-15 14:04:15.431184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:51.108 [2024-07-15 14:04:15.431204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:51.108 [2024-07-15 14:04:15.431217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.108 [2024-07-15 14:04:15.436094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.108 [2024-07-15 14:04:15.436157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:51.108 [2024-07-15 14:04:15.436178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.787 ms 00:22:51.108 [2024-07-15 14:04:15.436191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.108 [2024-07-15 14:04:15.436413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.108 [2024-07-15 14:04:15.436455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:51.108 [2024-07-15 14:04:15.436484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:22:51.108 [2024-07-15 14:04:15.436505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.108 [2024-07-15 14:04:15.436589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.108 [2024-07-15 14:04:15.436626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:51.108 [2024-07-15 14:04:15.436665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:51.108 [2024-07-15 14:04:15.436687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.108 [2024-07-15 14:04:15.436764] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:51.108 [2024-07-15 14:04:15.441487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.108 [2024-07-15 14:04:15.441535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:51.108 [2024-07-15 14:04:15.441552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.747 ms 00:22:51.108 [2024-07-15 14:04:15.441568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.108 [2024-07-15 
14:04:15.441658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.108 [2024-07-15 14:04:15.441682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:51.108 [2024-07-15 14:04:15.441697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:51.108 [2024-07-15 14:04:15.441711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.108 [2024-07-15 14:04:15.441746] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:51.108 [2024-07-15 14:04:15.441912] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:51.108 [2024-07-15 14:04:15.441940] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:51.108 [2024-07-15 14:04:15.441962] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:51.108 [2024-07-15 14:04:15.441978] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:51.108 [2024-07-15 14:04:15.441995] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:51.108 [2024-07-15 14:04:15.442007] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:51.108 [2024-07-15 14:04:15.442020] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:51.108 [2024-07-15 14:04:15.442036] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:51.108 [2024-07-15 14:04:15.442074] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:51.108 [2024-07-15 14:04:15.442087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.108 [2024-07-15 14:04:15.442102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:51.108 [2024-07-15 14:04:15.442115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:22:51.108 [2024-07-15 14:04:15.442129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.108 [2024-07-15 14:04:15.442235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.108 [2024-07-15 14:04:15.442254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:51.108 [2024-07-15 14:04:15.442268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:51.108 [2024-07-15 14:04:15.442282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.108 [2024-07-15 14:04:15.442454] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:51.108 [2024-07-15 14:04:15.442480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:51.108 [2024-07-15 14:04:15.442494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:51.108 [2024-07-15 14:04:15.442508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.108 [2024-07-15 14:04:15.442521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:51.108 [2024-07-15 14:04:15.442534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:51.108 [2024-07-15 14:04:15.442546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:51.108 [2024-07-15 14:04:15.442559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:22:51.108 [2024-07-15 14:04:15.442570] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:51.108 [2024-07-15 14:04:15.442583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:51.108 [2024-07-15 14:04:15.442594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:51.108 [2024-07-15 14:04:15.442607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:51.108 [2024-07-15 14:04:15.442617] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:51.109 [2024-07-15 14:04:15.442632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:51.109 [2024-07-15 14:04:15.442649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:51.109 [2024-07-15 14:04:15.442686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.109 [2024-07-15 14:04:15.442698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:51.109 [2024-07-15 14:04:15.442714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:51.109 [2024-07-15 14:04:15.442724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.109 [2024-07-15 14:04:15.442738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:51.109 [2024-07-15 14:04:15.442749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:51.109 [2024-07-15 14:04:15.442762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.109 [2024-07-15 14:04:15.442774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:51.109 [2024-07-15 14:04:15.442787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:51.109 [2024-07-15 14:04:15.442797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.109 [2024-07-15 14:04:15.442810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:51.109 [2024-07-15 14:04:15.442821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:51.109 [2024-07-15 14:04:15.442834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.109 [2024-07-15 14:04:15.442845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:51.109 [2024-07-15 14:04:15.442858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:51.109 [2024-07-15 14:04:15.442869] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.109 [2024-07-15 14:04:15.442882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:51.109 [2024-07-15 14:04:15.442893] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:51.109 [2024-07-15 14:04:15.442908] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:51.109 [2024-07-15 14:04:15.442919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:51.109 [2024-07-15 14:04:15.442932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:51.109 [2024-07-15 14:04:15.442943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:51.109 [2024-07-15 14:04:15.442956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:51.109 [2024-07-15 14:04:15.442967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:51.109 [2024-07-15 14:04:15.442982] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.109 [2024-07-15 14:04:15.442993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:51.109 [2024-07-15 14:04:15.443006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:51.109 [2024-07-15 14:04:15.443018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.109 [2024-07-15 14:04:15.443030] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:51.109 [2024-07-15 14:04:15.443042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:51.109 [2024-07-15 14:04:15.443055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:51.109 [2024-07-15 14:04:15.443067] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.109 [2024-07-15 14:04:15.443081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:51.109 [2024-07-15 14:04:15.443093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:51.109 [2024-07-15 14:04:15.443108] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:51.109 [2024-07-15 14:04:15.443121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:51.109 [2024-07-15 14:04:15.443134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:51.109 [2024-07-15 14:04:15.443145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:51.109 [2024-07-15 14:04:15.443162] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:51.109 [2024-07-15 14:04:15.443179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:51.109 [2024-07-15 14:04:15.443195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:51.109 [2024-07-15 14:04:15.443207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:51.109 [2024-07-15 14:04:15.443221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:51.109 [2024-07-15 14:04:15.443232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:51.109 [2024-07-15 14:04:15.443246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:51.109 [2024-07-15 14:04:15.443258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:51.109 [2024-07-15 14:04:15.443271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:51.109 [2024-07-15 14:04:15.443283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:51.109 [2024-07-15 14:04:15.443298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:51.109 [2024-07-15 14:04:15.443326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:51.109 [2024-07-15 14:04:15.443344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:51.109 [2024-07-15 14:04:15.443356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:51.109 [2024-07-15 14:04:15.443369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:51.109 [2024-07-15 14:04:15.443381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:51.109 [2024-07-15 14:04:15.443394] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:51.109 [2024-07-15 14:04:15.443408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:51.109 [2024-07-15 14:04:15.443423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:51.109 [2024-07-15 14:04:15.443435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:51.109 [2024-07-15 14:04:15.443449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:51.109 [2024-07-15 14:04:15.443460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:51.109 [2024-07-15 14:04:15.443476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.109 [2024-07-15 14:04:15.443489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:51.109 [2024-07-15 14:04:15.443504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:22:51.109 [2024-07-15 14:04:15.443515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.109 [2024-07-15 14:04:15.443598] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:22:51.109 [2024-07-15 14:04:15.443615] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:53.002 [2024-07-15 14:04:17.345150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.002 [2024-07-15 14:04:17.345230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:53.002 [2024-07-15 14:04:17.345259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1901.550 ms 00:22:53.002 [2024-07-15 14:04:17.345276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.002 [2024-07-15 14:04:17.383937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.002 [2024-07-15 14:04:17.384006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.002 [2024-07-15 14:04:17.384033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.223 ms 00:22:53.002 [2024-07-15 14:04:17.384050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.002 [2024-07-15 14:04:17.384270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.002 [2024-07-15 14:04:17.384314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:53.002 [2024-07-15 14:04:17.384339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:22:53.002 [2024-07-15 14:04:17.384357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.002 [2024-07-15 14:04:17.441579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.002 [2024-07-15 14:04:17.441664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.002 [2024-07-15 14:04:17.441699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.165 ms 00:22:53.002 [2024-07-15 14:04:17.441720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.002 [2024-07-15 14:04:17.441905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.002 [2024-07-15 14:04:17.441936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.002 [2024-07-15 14:04:17.441965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:53.002 [2024-07-15 14:04:17.442000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.002 [2024-07-15 14:04:17.442510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.002 [2024-07-15 14:04:17.442549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.002 [2024-07-15 14:04:17.442576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:22:53.002 [2024-07-15 14:04:17.442595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.002 [2024-07-15 14:04:17.442833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.002 [2024-07-15 14:04:17.442863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.002 [2024-07-15 14:04:17.442888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:22:53.002 [2024-07-15 14:04:17.442907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.002 [2024-07-15 14:04:17.466536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.002 [2024-07-15 14:04:17.466607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.002 [2024-07-15 
14:04:17.466635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.564 ms 00:22:53.002 [2024-07-15 14:04:17.466651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.002 [2024-07-15 14:04:17.482852] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:53.002 [2024-07-15 14:04:17.498869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.002 [2024-07-15 14:04:17.498958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:53.002 [2024-07-15 14:04:17.498984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.034 ms 00:22:53.002 [2024-07-15 14:04:17.499002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.260 [2024-07-15 14:04:17.565271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.260 [2024-07-15 14:04:17.565359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:53.260 [2024-07-15 14:04:17.565384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.115 ms 00:22:53.260 [2024-07-15 14:04:17.565403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.260 [2024-07-15 14:04:17.565823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.260 [2024-07-15 14:04:17.565863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:53.260 [2024-07-15 14:04:17.565882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:22:53.260 [2024-07-15 14:04:17.565903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.260 [2024-07-15 14:04:17.600343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.260 [2024-07-15 14:04:17.600410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:53.260 [2024-07-15 14:04:17.600432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.386 ms 00:22:53.260 [2024-07-15 14:04:17.600447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.260 [2024-07-15 14:04:17.633872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.260 [2024-07-15 14:04:17.633979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:53.260 [2024-07-15 14:04:17.634015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.301 ms 00:22:53.260 [2024-07-15 14:04:17.634044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.260 [2024-07-15 14:04:17.635481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.260 [2024-07-15 14:04:17.635545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:53.260 [2024-07-15 14:04:17.635573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:22:53.260 [2024-07-15 14:04:17.635600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.260 [2024-07-15 14:04:17.727733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.260 [2024-07-15 14:04:17.727809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:53.260 [2024-07-15 14:04:17.727832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.065 ms 00:22:53.260 [2024-07-15 14:04:17.727857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.260 [2024-07-15 
14:04:17.760666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.260 [2024-07-15 14:04:17.760735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:53.260 [2024-07-15 14:04:17.760756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.708 ms 00:22:53.260 [2024-07-15 14:04:17.760775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.260 [2024-07-15 14:04:17.792536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.260 [2024-07-15 14:04:17.792605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:53.260 [2024-07-15 14:04:17.792625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.657 ms 00:22:53.260 [2024-07-15 14:04:17.792639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.517 [2024-07-15 14:04:17.824127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.517 [2024-07-15 14:04:17.824192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:53.517 [2024-07-15 14:04:17.824213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.389 ms 00:22:53.517 [2024-07-15 14:04:17.824228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.517 [2024-07-15 14:04:17.824367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.517 [2024-07-15 14:04:17.824393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:53.517 [2024-07-15 14:04:17.824408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:53.517 [2024-07-15 14:04:17.824425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.517 [2024-07-15 14:04:17.824517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.517 [2024-07-15 14:04:17.824537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:53.517 [2024-07-15 14:04:17.824550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:53.517 [2024-07-15 14:04:17.824587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.517 [2024-07-15 14:04:17.825589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:53.517 [2024-07-15 14:04:17.829811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2400.241 ms, result 0 00:22:53.517 [2024-07-15 14:04:17.830690] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:53.517 { 00:22:53.517 "name": "ftl0", 00:22:53.517 "uuid": "96438210-a730-46dd-94de-bf7d6eb48d99" 00:22:53.517 } 00:22:53.517 14:04:17 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:22:53.517 14:04:17 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:22:53.517 14:04:17 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:53.517 14:04:17 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:22:53.517 14:04:17 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:53.517 14:04:17 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:53.517 14:04:17 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:53.774 14:04:18 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:54.033 [ 00:22:54.033 { 00:22:54.033 "name": "ftl0", 00:22:54.033 "aliases": [ 00:22:54.033 "96438210-a730-46dd-94de-bf7d6eb48d99" 00:22:54.033 ], 00:22:54.033 "product_name": "FTL disk", 00:22:54.033 "block_size": 4096, 00:22:54.033 "num_blocks": 23592960, 00:22:54.033 "uuid": "96438210-a730-46dd-94de-bf7d6eb48d99", 00:22:54.033 "assigned_rate_limits": { 00:22:54.033 "rw_ios_per_sec": 0, 00:22:54.033 "rw_mbytes_per_sec": 0, 00:22:54.033 "r_mbytes_per_sec": 0, 00:22:54.033 "w_mbytes_per_sec": 0 00:22:54.033 }, 00:22:54.033 "claimed": false, 00:22:54.033 "zoned": false, 00:22:54.033 "supported_io_types": { 00:22:54.033 "read": true, 00:22:54.033 "write": true, 00:22:54.033 "unmap": true, 00:22:54.033 "flush": true, 00:22:54.033 "reset": false, 00:22:54.033 "nvme_admin": false, 00:22:54.033 "nvme_io": false, 00:22:54.033 "nvme_io_md": false, 00:22:54.033 "write_zeroes": true, 00:22:54.033 "zcopy": false, 00:22:54.033 "get_zone_info": false, 00:22:54.033 "zone_management": false, 00:22:54.033 "zone_append": false, 00:22:54.033 "compare": false, 00:22:54.033 "compare_and_write": false, 00:22:54.033 "abort": false, 00:22:54.033 "seek_hole": false, 00:22:54.033 "seek_data": false, 00:22:54.033 "copy": false, 00:22:54.033 "nvme_iov_md": false 00:22:54.033 }, 00:22:54.033 "driver_specific": { 00:22:54.033 "ftl": { 00:22:54.033 "base_bdev": "0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd", 00:22:54.033 "cache": "nvc0n1p0" 00:22:54.033 } 00:22:54.033 } 00:22:54.033 } 00:22:54.033 ] 00:22:54.033 14:04:18 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:22:54.033 14:04:18 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:22:54.033 14:04:18 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:54.291 14:04:18 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:22:54.291 14:04:18 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:22:54.548 14:04:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:22:54.548 { 00:22:54.548 "name": "ftl0", 00:22:54.548 "aliases": [ 00:22:54.548 "96438210-a730-46dd-94de-bf7d6eb48d99" 00:22:54.548 ], 00:22:54.548 "product_name": "FTL disk", 00:22:54.548 "block_size": 4096, 00:22:54.548 "num_blocks": 23592960, 00:22:54.548 "uuid": "96438210-a730-46dd-94de-bf7d6eb48d99", 00:22:54.548 "assigned_rate_limits": { 00:22:54.548 "rw_ios_per_sec": 0, 00:22:54.548 "rw_mbytes_per_sec": 0, 00:22:54.548 "r_mbytes_per_sec": 0, 00:22:54.548 "w_mbytes_per_sec": 0 00:22:54.548 }, 00:22:54.548 "claimed": false, 00:22:54.548 "zoned": false, 00:22:54.548 "supported_io_types": { 00:22:54.548 "read": true, 00:22:54.548 "write": true, 00:22:54.548 "unmap": true, 00:22:54.548 "flush": true, 00:22:54.548 "reset": false, 00:22:54.548 "nvme_admin": false, 00:22:54.548 "nvme_io": false, 00:22:54.548 "nvme_io_md": false, 00:22:54.548 "write_zeroes": true, 00:22:54.548 "zcopy": false, 00:22:54.548 "get_zone_info": false, 00:22:54.548 "zone_management": false, 00:22:54.548 "zone_append": false, 00:22:54.548 "compare": false, 00:22:54.548 "compare_and_write": false, 00:22:54.548 "abort": false, 00:22:54.548 "seek_hole": false, 00:22:54.548 "seek_data": false, 00:22:54.548 "copy": false, 00:22:54.548 "nvme_iov_md": false 00:22:54.548 }, 00:22:54.548 "driver_specific": { 00:22:54.548 "ftl": { 00:22:54.548 "base_bdev": "0e1d315a-e7d7-4154-bdf3-06b7f0e80dbd", 00:22:54.548 "cache": "nvc0n1p0" 
00:22:54.548 } 00:22:54.548 } 00:22:54.548 } 00:22:54.548 ]' 00:22:54.548 14:04:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:22:54.548 14:04:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:22:54.548 14:04:19 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:54.806 [2024-07-15 14:04:19.314987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.806 [2024-07-15 14:04:19.315054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:54.806 [2024-07-15 14:04:19.315080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:54.806 [2024-07-15 14:04:19.315094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.806 [2024-07-15 14:04:19.315148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:54.806 [2024-07-15 14:04:19.318491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.806 [2024-07-15 14:04:19.318532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:54.806 [2024-07-15 14:04:19.318548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.318 ms 00:22:54.806 [2024-07-15 14:04:19.318567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.806 [2024-07-15 14:04:19.319150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.806 [2024-07-15 14:04:19.319189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:54.806 [2024-07-15 14:04:19.319205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:22:54.806 [2024-07-15 14:04:19.319223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.806 [2024-07-15 14:04:19.322969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.806 [2024-07-15 14:04:19.323003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:54.806 [2024-07-15 14:04:19.323018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms 00:22:54.806 [2024-07-15 14:04:19.323031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.806 [2024-07-15 14:04:19.330639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.806 [2024-07-15 14:04:19.330684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:54.806 [2024-07-15 14:04:19.330700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.551 ms 00:22:54.806 [2024-07-15 14:04:19.330714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.065 [2024-07-15 14:04:19.361896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.065 [2024-07-15 14:04:19.361970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:55.065 [2024-07-15 14:04:19.361991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.056 ms 00:22:55.065 [2024-07-15 14:04:19.362009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.065 [2024-07-15 14:04:19.380731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.065 [2024-07-15 14:04:19.380815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:55.065 [2024-07-15 14:04:19.380837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.603 ms 00:22:55.065 
[2024-07-15 14:04:19.380857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.065 [2024-07-15 14:04:19.381147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.065 [2024-07-15 14:04:19.381173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:55.065 [2024-07-15 14:04:19.381187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:22:55.065 [2024-07-15 14:04:19.381202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.065 [2024-07-15 14:04:19.413059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.065 [2024-07-15 14:04:19.413144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:55.065 [2024-07-15 14:04:19.413165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.818 ms 00:22:55.065 [2024-07-15 14:04:19.413180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.065 [2024-07-15 14:04:19.444769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.065 [2024-07-15 14:04:19.444849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:55.065 [2024-07-15 14:04:19.444870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.427 ms 00:22:55.065 [2024-07-15 14:04:19.444889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.065 [2024-07-15 14:04:19.475614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.065 [2024-07-15 14:04:19.475707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:55.065 [2024-07-15 14:04:19.475729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.606 ms 00:22:55.065 [2024-07-15 14:04:19.475743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.065 [2024-07-15 14:04:19.506641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.065 [2024-07-15 14:04:19.506717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:55.065 [2024-07-15 14:04:19.506738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.722 ms 00:22:55.065 [2024-07-15 14:04:19.506753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.065 [2024-07-15 14:04:19.506861] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:55.065 [2024-07-15 14:04:19.506895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.506911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.506925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.506938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.506952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.506965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.506982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.506994] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:55.065 [2024-07-15 14:04:19.507335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507381] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 
14:04:19.507707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.507994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:22:55.066 [2024-07-15 14:04:19.508035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:55.066 [2024-07-15 14:04:19.508284] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:55.066 [2024-07-15 14:04:19.508296] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 96438210-a730-46dd-94de-bf7d6eb48d99 00:22:55.066 [2024-07-15 14:04:19.508325] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:55.066 [2024-07-15 14:04:19.508341] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:55.066 [2024-07-15 14:04:19.508354] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:55.066 [2024-07-15 14:04:19.508366] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:55.066 [2024-07-15 14:04:19.508378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:55.066 [2024-07-15 14:04:19.508390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:55.066 [2024-07-15 14:04:19.508403] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:55.066 [2024-07-15 14:04:19.508413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:55.066 [2024-07-15 14:04:19.508425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:55.066 [2024-07-15 14:04:19.508437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.066 [2024-07-15 14:04:19.508450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:55.066 [2024-07-15 14:04:19.508463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:22:55.066 [2024-07-15 14:04:19.508476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.066 [2024-07-15 14:04:19.525160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.066 [2024-07-15 14:04:19.525218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:55.066 [2024-07-15 14:04:19.525236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.640 ms 00:22:55.066 [2024-07-15 14:04:19.525254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.066 [2024-07-15 14:04:19.525794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.066 [2024-07-15 14:04:19.525825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:55.066 [2024-07-15 14:04:19.525841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:22:55.066 [2024-07-15 14:04:19.525856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.066 [2024-07-15 14:04:19.583941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.066 [2024-07-15 14:04:19.584017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:55.066 [2024-07-15 14:04:19.584038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.066 [2024-07-15 14:04:19.584052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.066 [2024-07-15 14:04:19.584209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.066 [2024-07-15 14:04:19.584232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:55.066 [2024-07-15 14:04:19.584246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.067 [2024-07-15 14:04:19.584260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.067 [2024-07-15 14:04:19.584377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.067 [2024-07-15 14:04:19.584402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:55.067 [2024-07-15 14:04:19.584415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.067 [2024-07-15 14:04:19.584431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.067 [2024-07-15 14:04:19.584468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.067 [2024-07-15 14:04:19.584485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:55.067 [2024-07-15 14:04:19.584497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.067 [2024-07-15 14:04:19.584510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.326 [2024-07-15 14:04:19.689655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:22:55.326 [2024-07-15 14:04:19.689730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:55.326 [2024-07-15 14:04:19.689749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.326 [2024-07-15 14:04:19.689764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.326 [2024-07-15 14:04:19.774911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.326 [2024-07-15 14:04:19.774992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:55.326 [2024-07-15 14:04:19.775014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.326 [2024-07-15 14:04:19.775029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.326 [2024-07-15 14:04:19.775157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.326 [2024-07-15 14:04:19.775184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:55.326 [2024-07-15 14:04:19.775197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.326 [2024-07-15 14:04:19.775214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.326 [2024-07-15 14:04:19.775272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.326 [2024-07-15 14:04:19.775290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:55.326 [2024-07-15 14:04:19.775331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.326 [2024-07-15 14:04:19.775350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.326 [2024-07-15 14:04:19.775504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.326 [2024-07-15 14:04:19.775530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:55.326 [2024-07-15 14:04:19.775564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.326 [2024-07-15 14:04:19.775580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.326 [2024-07-15 14:04:19.775653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.326 [2024-07-15 14:04:19.775676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:55.326 [2024-07-15 14:04:19.775688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.326 [2024-07-15 14:04:19.775702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.326 [2024-07-15 14:04:19.775759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.326 [2024-07-15 14:04:19.775786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:55.326 [2024-07-15 14:04:19.775801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.326 [2024-07-15 14:04:19.775818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.326 [2024-07-15 14:04:19.775886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.326 [2024-07-15 14:04:19.775906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:55.326 [2024-07-15 14:04:19.775920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.326 [2024-07-15 14:04:19.775934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.326 [2024-07-15 
14:04:19.776145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 461.151 ms, result 0 00:22:55.326 true 00:22:55.326 14:04:19 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 80893 00:22:55.326 14:04:19 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80893 ']' 00:22:55.326 14:04:19 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80893 00:22:55.326 14:04:19 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:22:55.326 14:04:19 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.326 14:04:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80893 00:22:55.326 killing process with pid 80893 00:22:55.326 14:04:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:55.326 14:04:19 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:55.326 14:04:19 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80893' 00:22:55.326 14:04:19 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80893 00:22:55.326 14:04:19 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80893 00:23:00.669 14:04:24 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:23:01.235 65536+0 records in 00:23:01.235 65536+0 records out 00:23:01.235 268435456 bytes (268 MB, 256 MiB) copied, 1.20335 s, 223 MB/s 00:23:01.235 14:04:25 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:01.493 [2024-07-15 14:04:25.792438] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:23:01.493 [2024-07-15 14:04:25.792621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81097 ] 00:23:01.493 [2024-07-15 14:04:25.964174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.751 [2024-07-15 14:04:26.183549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.009 [2024-07-15 14:04:26.493454] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.009 [2024-07-15 14:04:26.493552] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.269 [2024-07-15 14:04:26.654713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.269 [2024-07-15 14:04:26.654785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:02.269 [2024-07-15 14:04:26.654807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:02.269 [2024-07-15 14:04:26.654819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.658014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.269 [2024-07-15 14:04:26.658061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.269 [2024-07-15 14:04:26.658079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:23:02.269 [2024-07-15 14:04:26.658103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.658229] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:02.269 [2024-07-15 14:04:26.659230] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:02.269 [2024-07-15 14:04:26.659275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.269 [2024-07-15 14:04:26.659290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.269 [2024-07-15 14:04:26.659325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:23:02.269 [2024-07-15 14:04:26.659340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.660594] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:02.269 [2024-07-15 14:04:26.676965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.269 [2024-07-15 14:04:26.677012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:02.269 [2024-07-15 14:04:26.677038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.372 ms 00:23:02.269 [2024-07-15 14:04:26.677051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.677174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.269 [2024-07-15 14:04:26.677196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:02.269 [2024-07-15 14:04:26.677210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:02.269 [2024-07-15 14:04:26.677221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.681752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:02.269 [2024-07-15 14:04:26.681811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.269 [2024-07-15 14:04:26.681829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.470 ms 00:23:02.269 [2024-07-15 14:04:26.681841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.681986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.269 [2024-07-15 14:04:26.682008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.269 [2024-07-15 14:04:26.682022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:02.269 [2024-07-15 14:04:26.682033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.682081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.269 [2024-07-15 14:04:26.682098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:02.269 [2024-07-15 14:04:26.682111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:02.269 [2024-07-15 14:04:26.682126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.682159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:02.269 [2024-07-15 14:04:26.686474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.269 [2024-07-15 14:04:26.686530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.269 [2024-07-15 14:04:26.686547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.326 ms 00:23:02.269 [2024-07-15 14:04:26.686559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.686636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.269 [2024-07-15 14:04:26.686654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:02.269 [2024-07-15 14:04:26.686667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:02.269 [2024-07-15 14:04:26.686677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.686711] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:02.269 [2024-07-15 14:04:26.686739] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:02.269 [2024-07-15 14:04:26.686786] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:02.269 [2024-07-15 14:04:26.686807] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:02.269 [2024-07-15 14:04:26.686914] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:02.269 [2024-07-15 14:04:26.686929] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:02.269 [2024-07-15 14:04:26.686943] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:02.269 [2024-07-15 14:04:26.686958] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:02.269 [2024-07-15 14:04:26.686971] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:02.269 [2024-07-15 14:04:26.686982] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:02.269 [2024-07-15 14:04:26.686997] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:02.269 [2024-07-15 14:04:26.687008] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:02.269 [2024-07-15 14:04:26.687019] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:02.269 [2024-07-15 14:04:26.687030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.269 [2024-07-15 14:04:26.687041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:02.269 [2024-07-15 14:04:26.687052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:23:02.269 [2024-07-15 14:04:26.687063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.687161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.269 [2024-07-15 14:04:26.687176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:02.269 [2024-07-15 14:04:26.687188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:02.269 [2024-07-15 14:04:26.687203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.269 [2024-07-15 14:04:26.687353] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:02.270 [2024-07-15 14:04:26.687374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:02.270 [2024-07-15 14:04:26.687386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.270 [2024-07-15 14:04:26.687398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:02.270 [2024-07-15 14:04:26.687425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:02.270 [2024-07-15 14:04:26.687446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:02.270 [2024-07-15 14:04:26.687456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.270 [2024-07-15 14:04:26.687476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:02.270 [2024-07-15 14:04:26.687486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:02.270 [2024-07-15 14:04:26.687496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.270 [2024-07-15 14:04:26.687506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:02.270 [2024-07-15 14:04:26.687516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:02.270 [2024-07-15 14:04:26.687526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:02.270 [2024-07-15 14:04:26.687546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:02.270 [2024-07-15 14:04:26.687571] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:02.270 [2024-07-15 14:04:26.687592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687602] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.270 [2024-07-15 14:04:26.687612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:02.270 [2024-07-15 14:04:26.687622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.270 [2024-07-15 14:04:26.687642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:02.270 [2024-07-15 14:04:26.687653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.270 [2024-07-15 14:04:26.687672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:02.270 [2024-07-15 14:04:26.687682] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.270 [2024-07-15 14:04:26.687702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:02.270 [2024-07-15 14:04:26.687712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.270 [2024-07-15 14:04:26.687732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:02.270 [2024-07-15 14:04:26.687742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:02.270 [2024-07-15 14:04:26.687755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.270 [2024-07-15 14:04:26.687765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:02.270 [2024-07-15 14:04:26.687776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:02.270 [2024-07-15 14:04:26.687786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:02.270 [2024-07-15 14:04:26.687806] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:02.270 [2024-07-15 14:04:26.687816] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687825] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:02.270 [2024-07-15 14:04:26.687836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:02.270 [2024-07-15 14:04:26.687847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.270 [2024-07-15 14:04:26.687857] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.270 [2024-07-15 14:04:26.687868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:02.270 [2024-07-15 14:04:26.687879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:02.270 [2024-07-15 14:04:26.687888] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:02.270 
[2024-07-15 14:04:26.687899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:02.270 [2024-07-15 14:04:26.687909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:02.270 [2024-07-15 14:04:26.687919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:02.270 [2024-07-15 14:04:26.687930] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:02.270 [2024-07-15 14:04:26.687949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.270 [2024-07-15 14:04:26.687962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:02.270 [2024-07-15 14:04:26.687975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:02.270 [2024-07-15 14:04:26.687985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:02.270 [2024-07-15 14:04:26.687997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:02.270 [2024-07-15 14:04:26.688008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:02.270 [2024-07-15 14:04:26.688019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:02.270 [2024-07-15 14:04:26.688030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:02.270 [2024-07-15 14:04:26.688040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:02.270 [2024-07-15 14:04:26.688051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:02.270 [2024-07-15 14:04:26.688062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:02.270 [2024-07-15 14:04:26.688073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:02.270 [2024-07-15 14:04:26.688083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:02.270 [2024-07-15 14:04:26.688094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:02.270 [2024-07-15 14:04:26.688108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:02.270 [2024-07-15 14:04:26.688119] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:02.270 [2024-07-15 14:04:26.688132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.270 [2024-07-15 14:04:26.688144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.270 [2024-07-15 14:04:26.688155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:02.270 [2024-07-15 14:04:26.688166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:02.270 [2024-07-15 14:04:26.688177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:02.270 [2024-07-15 14:04:26.688189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.270 [2024-07-15 14:04:26.688201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:02.270 [2024-07-15 14:04:26.688212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:23:02.270 [2024-07-15 14:04:26.688223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.270 [2024-07-15 14:04:26.729281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.270 [2024-07-15 14:04:26.729350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.270 [2024-07-15 14:04:26.729371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.984 ms 00:23:02.270 [2024-07-15 14:04:26.729384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.270 [2024-07-15 14:04:26.729587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.270 [2024-07-15 14:04:26.729608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:02.270 [2024-07-15 14:04:26.729621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:02.270 [2024-07-15 14:04:26.729640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.270 [2024-07-15 14:04:26.768292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.270 [2024-07-15 14:04:26.768359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.270 [2024-07-15 14:04:26.768379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.616 ms 00:23:02.270 [2024-07-15 14:04:26.768391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.270 [2024-07-15 14:04:26.768520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.270 [2024-07-15 14:04:26.768540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.270 [2024-07-15 14:04:26.768554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:02.270 [2024-07-15 14:04:26.768565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.270 [2024-07-15 14:04:26.768896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.270 [2024-07-15 14:04:26.768915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.270 [2024-07-15 14:04:26.768929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:23:02.270 [2024-07-15 14:04:26.768940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.270 [2024-07-15 14:04:26.769096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.270 [2024-07-15 14:04:26.769118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.270 [2024-07-15 14:04:26.769130] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:23:02.270 [2024-07-15 14:04:26.769141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.270 [2024-07-15 14:04:26.785544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.270 [2024-07-15 14:04:26.785606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.270 [2024-07-15 14:04:26.785625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.370 ms 00:23:02.270 [2024-07-15 14:04:26.785638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.270 [2024-07-15 14:04:26.802070] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:02.271 [2024-07-15 14:04:26.802124] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:02.271 [2024-07-15 14:04:26.802144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.271 [2024-07-15 14:04:26.802157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:02.271 [2024-07-15 14:04:26.802172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.321 ms 00:23:02.271 [2024-07-15 14:04:26.802183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:26.832144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:26.832207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:02.529 [2024-07-15 14:04:26.832227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.850 ms 00:23:02.529 [2024-07-15 14:04:26.832239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:26.848170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:26.848214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:02.529 [2024-07-15 14:04:26.848231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.784 ms 00:23:02.529 [2024-07-15 14:04:26.848242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:26.863803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:26.863847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:02.529 [2024-07-15 14:04:26.863863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.447 ms 00:23:02.529 [2024-07-15 14:04:26.863875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:26.864704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:26.864729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:02.529 [2024-07-15 14:04:26.864749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:23:02.529 [2024-07-15 14:04:26.864761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:26.941827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:26.941895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:02.529 [2024-07-15 14:04:26.941929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.027 ms 00:23:02.529 [2024-07-15 14:04:26.941942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:26.955191] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:02.529 [2024-07-15 14:04:26.969327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:26.969393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:02.529 [2024-07-15 14:04:26.969414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.216 ms 00:23:02.529 [2024-07-15 14:04:26.969427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:26.969565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:26.969586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:02.529 [2024-07-15 14:04:26.969604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:02.529 [2024-07-15 14:04:26.969616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:26.969683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:26.969700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:02.529 [2024-07-15 14:04:26.969712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:02.529 [2024-07-15 14:04:26.969731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:26.969764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:26.969779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:02.529 [2024-07-15 14:04:26.969791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:02.529 [2024-07-15 14:04:26.969807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:26.969845] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:02.529 [2024-07-15 14:04:26.969861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:26.969872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:02.529 [2024-07-15 14:04:26.969884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:02.529 [2024-07-15 14:04:26.969896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:27.001363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:27.001440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:02.529 [2024-07-15 14:04:27.001472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.434 ms 00:23:02.529 [2024-07-15 14:04:27.001485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.529 [2024-07-15 14:04:27.001644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.529 [2024-07-15 14:04:27.001666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:02.529 [2024-07-15 14:04:27.001679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:02.529 [2024-07-15 14:04:27.001691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
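Each FTL management step above is traced as an Action followed by its name, duration, and status, so per-step startup cost can be pulled straight out of the console output. Below is a minimal sketch, assuming the notices were captured one per line (as the live console prints them) and saved to a file called ftl.log (a hypothetical path), that pairs each step name with the duration that follows it and prints the slowest steps:

    awk '
        / name: /     { sub(/.* name: /, ""); name = $0 }      # remember the step name
        / duration: / { match($0, /duration: [0-9.]+/)         # pull out the number of ms
                        printf "%10s ms  %s\n", substr($0, RSTART + 10, RLENGTH - 10), name }
    ' ftl.log | sort -rn | head

Run against the startup sequence above, this would rank "Restore P2L checkpoints" (77.027 ms) and "Initialize metadata" (40.984 ms) at the top.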
00:23:02.529 [2024-07-15 14:04:27.002683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:02.529 [2024-07-15 14:04:27.006876] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 347.590 ms, result 0 00:23:02.529 [2024-07-15 14:04:27.007731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:02.529 [2024-07-15 14:04:27.024260] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:11.915  Copying: 256/256 [MB] (average 27 MBps)[2024-07-15 14:04:36.205919] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:11.915 [2024-07-15 14:04:36.218608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.218650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:11.915 [2024-07-15 14:04:36.218670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:11.915 [2024-07-15 14:04:36.218682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.218714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:11.915 [2024-07-15 14:04:36.221989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.222016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:11.915 [2024-07-15 14:04:36.222038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.254 ms 00:23:11.915 [2024-07-15 14:04:36.222050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.223609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.223648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:11.915 [2024-07-15 14:04:36.223664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:23:11.915 [2024-07-15 14:04:36.223675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.230710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.230747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:11.915 [2024-07-15 14:04:36.230762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.011 ms 00:23:11.915 [2024-07-15 14:04:36.230781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.238294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.238339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:11.915 [2024-07-15 14:04:36.238355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.449 ms 00:23:11.915 [2024-07-15 14:04:36.238366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.269629] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.269673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:11.915 [2024-07-15 14:04:36.269690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.210 ms 00:23:11.915 [2024-07-15 14:04:36.269701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.287806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.287857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:11.915 [2024-07-15 14:04:36.287876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.031 ms 00:23:11.915 [2024-07-15 14:04:36.287888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.288081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.288103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:11.915 [2024-07-15 14:04:36.288117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:23:11.915 [2024-07-15 14:04:36.288128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.319574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.319625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:11.915 [2024-07-15 14:04:36.319643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.422 ms 00:23:11.915 [2024-07-15 14:04:36.319655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.350493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.350542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:11.915 [2024-07-15 14:04:36.350559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.766 ms 00:23:11.915 [2024-07-15 14:04:36.350571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.381266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.381334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:11.915 [2024-07-15 14:04:36.381353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.620 ms 00:23:11.915 [2024-07-15 14:04:36.381364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.411922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.915 [2024-07-15 14:04:36.411966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:11.915 [2024-07-15 14:04:36.411984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.441 ms 00:23:11.915 [2024-07-15 14:04:36.411995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.915 [2024-07-15 14:04:36.412064] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:11.915 [2024-07-15 14:04:36.412089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 
14:04:36.412125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:23:11.915 [2024-07-15 14:04:36.412438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:11.915 [2024-07-15 14:04:36.412572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.412989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:11.916 [2024-07-15 14:04:36.413486] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:11.916 [2024-07-15 14:04:36.413502] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
96438210-a730-46dd-94de-bf7d6eb48d99 00:23:11.916 [2024-07-15 14:04:36.413520] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:11.916 [2024-07-15 14:04:36.413536] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:11.916 [2024-07-15 14:04:36.413551] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:11.916 [2024-07-15 14:04:36.413583] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:11.916 [2024-07-15 14:04:36.413599] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:11.916 [2024-07-15 14:04:36.413616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:11.916 [2024-07-15 14:04:36.413631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:11.916 [2024-07-15 14:04:36.413646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:11.916 [2024-07-15 14:04:36.413661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:11.916 [2024-07-15 14:04:36.413677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.916 [2024-07-15 14:04:36.413693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:11.916 [2024-07-15 14:04:36.413710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:23:11.916 [2024-07-15 14:04:36.413732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.916 [2024-07-15 14:04:36.430226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.916 [2024-07-15 14:04:36.430266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:11.916 [2024-07-15 14:04:36.430283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.457 ms 00:23:11.916 [2024-07-15 14:04:36.430296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.916 [2024-07-15 14:04:36.430784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.916 [2024-07-15 14:04:36.430808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:11.916 [2024-07-15 14:04:36.430830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:23:11.916 [2024-07-15 14:04:36.430842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.173 [2024-07-15 14:04:36.470913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.173 [2024-07-15 14:04:36.470975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:12.173 [2024-07-15 14:04:36.470992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.173 [2024-07-15 14:04:36.471004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.173 [2024-07-15 14:04:36.471118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.173 [2024-07-15 14:04:36.471135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:12.173 [2024-07-15 14:04:36.471155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.173 [2024-07-15 14:04:36.471166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.173 [2024-07-15 14:04:36.471230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.173 [2024-07-15 14:04:36.471249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:12.173 
[2024-07-15 14:04:36.471261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.173 [2024-07-15 14:04:36.471272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.173 [2024-07-15 14:04:36.471296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.173 [2024-07-15 14:04:36.471325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:12.173 [2024-07-15 14:04:36.471338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.173 [2024-07-15 14:04:36.471355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.173 [2024-07-15 14:04:36.571116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.173 [2024-07-15 14:04:36.571180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:12.173 [2024-07-15 14:04:36.571198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.173 [2024-07-15 14:04:36.571210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.173 [2024-07-15 14:04:36.654871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.173 [2024-07-15 14:04:36.654934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:12.173 [2024-07-15 14:04:36.654953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.173 [2024-07-15 14:04:36.654973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.173 [2024-07-15 14:04:36.655060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.173 [2024-07-15 14:04:36.655077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:12.173 [2024-07-15 14:04:36.655089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.173 [2024-07-15 14:04:36.655100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.173 [2024-07-15 14:04:36.655134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.173 [2024-07-15 14:04:36.655147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:12.173 [2024-07-15 14:04:36.655158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.173 [2024-07-15 14:04:36.655169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.173 [2024-07-15 14:04:36.655293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.173 [2024-07-15 14:04:36.655336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:12.173 [2024-07-15 14:04:36.655351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.173 [2024-07-15 14:04:36.655363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.173 [2024-07-15 14:04:36.655414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.173 [2024-07-15 14:04:36.655431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:12.174 [2024-07-15 14:04:36.655443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.174 [2024-07-15 14:04:36.655455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.174 [2024-07-15 14:04:36.655507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.174 [2024-07-15 14:04:36.655523] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:12.174 [2024-07-15 14:04:36.655534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.174 [2024-07-15 14:04:36.655545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.174 [2024-07-15 14:04:36.655601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.174 [2024-07-15 14:04:36.655619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:12.174 [2024-07-15 14:04:36.655631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.174 [2024-07-15 14:04:36.655642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.174 [2024-07-15 14:04:36.655811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 437.199 ms, result 0 00:23:13.547 00:23:13.547 00:23:13.547 14:04:37 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=81222 00:23:13.547 14:04:37 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:13.547 14:04:37 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 81222 00:23:13.547 14:04:37 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81222 ']' 00:23:13.547 14:04:37 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.547 14:04:37 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.547 14:04:37 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.547 14:04:37 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.547 14:04:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:13.547 [2024-07-15 14:04:38.001277] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
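The xtrace above is trim.sh's standard launch sequence: line 71 starts spdk_tgt in the background with -L ftl_init logging, line 72 records the pid in svcpid, and line 73 blocks in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, using the paths shown in this log; the polling loop is a simplified stand-in for waitforlisten from autotest_common.sh, not the real implementation:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start the target with FTL init logging, as trim.sh does, and remember its pid.
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    # Poll the default RPC socket until the target responds (or dies early).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "spdk_tgt (pid $svcpid) is listening on /var/tmp/spdk.sock"

Polling an RPC call rather than the socket file is the important detail: the socket can exist before the reactor is ready, which is why the log prints "Waiting for process to start up and listen on UNIX domain socket" before the load_config call goes through.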
00:23:13.547 [2024-07-15 14:04:38.001452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81222 ] 00:23:13.804 [2024-07-15 14:04:38.163481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.804 [2024-07-15 14:04:38.346103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.736 14:04:39 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.736 14:04:39 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:23:14.736 14:04:39 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:14.736 [2024-07-15 14:04:39.262255] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.736 [2024-07-15 14:04:39.262358] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.994 [2024-07-15 14:04:39.439915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.994 [2024-07-15 14:04:39.439990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:14.995 [2024-07-15 14:04:39.440011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:14.995 [2024-07-15 14:04:39.440026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.443197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.995 [2024-07-15 14:04:39.443246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.995 [2024-07-15 14:04:39.443265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.143 ms 00:23:14.995 [2024-07-15 14:04:39.443279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.443450] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:14.995 [2024-07-15 14:04:39.444414] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:14.995 [2024-07-15 14:04:39.444456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.995 [2024-07-15 14:04:39.444475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.995 [2024-07-15 14:04:39.444489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:23:14.995 [2024-07-15 14:04:39.444503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.445711] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:14.995 [2024-07-15 14:04:39.461820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.995 [2024-07-15 14:04:39.461890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:14.995 [2024-07-15 14:04:39.461916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.105 ms 00:23:14.995 [2024-07-15 14:04:39.461929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.462049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.995 [2024-07-15 14:04:39.462070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:14.995 [2024-07-15 14:04:39.462086] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:14.995 [2024-07-15 14:04:39.462098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.466388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.995 [2024-07-15 14:04:39.466434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:14.995 [2024-07-15 14:04:39.466458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:23:14.995 [2024-07-15 14:04:39.466471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.466632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.995 [2024-07-15 14:04:39.466654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:14.995 [2024-07-15 14:04:39.466673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:23:14.995 [2024-07-15 14:04:39.466686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.466745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.995 [2024-07-15 14:04:39.466761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:14.995 [2024-07-15 14:04:39.466778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:14.995 [2024-07-15 14:04:39.466791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.466832] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:14.995 [2024-07-15 14:04:39.471047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.995 [2024-07-15 14:04:39.471093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:14.995 [2024-07-15 14:04:39.471111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.231 ms 00:23:14.995 [2024-07-15 14:04:39.471128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.471200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.995 [2024-07-15 14:04:39.471232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:14.995 [2024-07-15 14:04:39.471246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:14.995 [2024-07-15 14:04:39.471269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.471315] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:14.995 [2024-07-15 14:04:39.471355] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:14.995 [2024-07-15 14:04:39.471413] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:14.995 [2024-07-15 14:04:39.471445] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:14.995 [2024-07-15 14:04:39.471552] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:14.995 [2024-07-15 14:04:39.471577] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:14.995 [2024-07-15 14:04:39.471598] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:14.995 [2024-07-15 14:04:39.471620] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:14.995 [2024-07-15 14:04:39.471635] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:14.995 [2024-07-15 14:04:39.471652] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:14.995 [2024-07-15 14:04:39.471664] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:14.995 [2024-07-15 14:04:39.471681] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:14.995 [2024-07-15 14:04:39.471693] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:14.995 [2024-07-15 14:04:39.471715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.995 [2024-07-15 14:04:39.471727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:14.995 [2024-07-15 14:04:39.471744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:23:14.995 [2024-07-15 14:04:39.471756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.471867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.995 [2024-07-15 14:04:39.471882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:14.995 [2024-07-15 14:04:39.471900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:14.995 [2024-07-15 14:04:39.471913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.995 [2024-07-15 14:04:39.472043] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:14.995 [2024-07-15 14:04:39.472063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:14.995 [2024-07-15 14:04:39.472081] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.995 [2024-07-15 14:04:39.472094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:14.995 [2024-07-15 14:04:39.472122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:14.995 [2024-07-15 14:04:39.472154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:14.995 [2024-07-15 14:04:39.472174] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472186] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.995 [2024-07-15 14:04:39.472201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:14.995 [2024-07-15 14:04:39.472213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:14.995 [2024-07-15 14:04:39.472229] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.995 [2024-07-15 14:04:39.472241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:14.995 [2024-07-15 14:04:39.472257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:14.995 [2024-07-15 14:04:39.472268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.995 
[2024-07-15 14:04:39.472283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:14.995 [2024-07-15 14:04:39.472295] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:14.995 [2024-07-15 14:04:39.472328] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:14.995 [2024-07-15 14:04:39.472358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472370] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.995 [2024-07-15 14:04:39.472386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:14.995 [2024-07-15 14:04:39.472397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472417] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.995 [2024-07-15 14:04:39.472429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:14.995 [2024-07-15 14:04:39.472444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.995 [2024-07-15 14:04:39.472486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:14.995 [2024-07-15 14:04:39.472498] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472515] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.995 [2024-07-15 14:04:39.472528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:14.995 [2024-07-15 14:04:39.472543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.995 [2024-07-15 14:04:39.472571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:14.995 [2024-07-15 14:04:39.472583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:14.995 [2024-07-15 14:04:39.472598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.995 [2024-07-15 14:04:39.472610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:14.995 [2024-07-15 14:04:39.472625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:14.995 [2024-07-15 14:04:39.472637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:14.995 [2024-07-15 14:04:39.472667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:14.995 [2024-07-15 14:04:39.472680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472691] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:14.995 [2024-07-15 14:04:39.472708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:14.995 [2024-07-15 14:04:39.472719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.995 [2024-07-15 14:04:39.472732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.995 [2024-07-15 14:04:39.472744] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:14.995 [2024-07-15 14:04:39.472757] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:14.995 [2024-07-15 14:04:39.472768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:14.996 [2024-07-15 14:04:39.472782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:14.996 [2024-07-15 14:04:39.472793] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:14.996 [2024-07-15 14:04:39.472806] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:14.996 [2024-07-15 14:04:39.472819] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:14.996 [2024-07-15 14:04:39.472835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.996 [2024-07-15 14:04:39.472849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:14.996 [2024-07-15 14:04:39.472867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:14.996 [2024-07-15 14:04:39.472879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:14.996 [2024-07-15 14:04:39.472893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:14.996 [2024-07-15 14:04:39.472905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:14.996 [2024-07-15 14:04:39.472918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:14.996 [2024-07-15 14:04:39.472930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:14.996 [2024-07-15 14:04:39.472945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:14.996 [2024-07-15 14:04:39.472956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:14.996 [2024-07-15 14:04:39.472970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:14.996 [2024-07-15 14:04:39.472981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:14.996 [2024-07-15 14:04:39.472996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:14.996 [2024-07-15 14:04:39.473008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:14.996 [2024-07-15 14:04:39.473022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:14.996 [2024-07-15 14:04:39.473034] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:14.996 [2024-07-15 
14:04:39.473048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.996 [2024-07-15 14:04:39.473062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:14.996 [2024-07-15 14:04:39.473077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:14.996 [2024-07-15 14:04:39.473090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:14.996 [2024-07-15 14:04:39.473103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:14.996 [2024-07-15 14:04:39.473116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.996 [2024-07-15 14:04:39.473131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:14.996 [2024-07-15 14:04:39.473143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.150 ms 00:23:14.996 [2024-07-15 14:04:39.473156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.996 [2024-07-15 14:04:39.506020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.996 [2024-07-15 14:04:39.506089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:14.996 [2024-07-15 14:04:39.506111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.760 ms 00:23:14.996 [2024-07-15 14:04:39.506130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.996 [2024-07-15 14:04:39.506343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.996 [2024-07-15 14:04:39.506371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:14.996 [2024-07-15 14:04:39.506385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:14.996 [2024-07-15 14:04:39.506399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.545410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.545473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:15.254 [2024-07-15 14:04:39.545492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.979 ms 00:23:15.254 [2024-07-15 14:04:39.545507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.545613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.545636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:15.254 [2024-07-15 14:04:39.545650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:15.254 [2024-07-15 14:04:39.545664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.545989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.546010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:15.254 [2024-07-15 14:04:39.546029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:23:15.254 [2024-07-15 14:04:39.546043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.546194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.546215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:15.254 [2024-07-15 14:04:39.546228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:23:15.254 [2024-07-15 14:04:39.546241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.563641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.563714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:15.254 [2024-07-15 14:04:39.563736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.371 ms 00:23:15.254 [2024-07-15 14:04:39.563759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.579992] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:15.254 [2024-07-15 14:04:39.580038] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:15.254 [2024-07-15 14:04:39.580057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.580072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:15.254 [2024-07-15 14:04:39.580086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.153 ms 00:23:15.254 [2024-07-15 14:04:39.580099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.609951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.610000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:15.254 [2024-07-15 14:04:39.610020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.762 ms 00:23:15.254 [2024-07-15 14:04:39.610035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.625663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.625710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:15.254 [2024-07-15 14:04:39.625739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.535 ms 00:23:15.254 [2024-07-15 14:04:39.625756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.641421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.641493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:15.254 [2024-07-15 14:04:39.641513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.571 ms 00:23:15.254 [2024-07-15 14:04:39.641527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.642430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.642466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:15.254 [2024-07-15 14:04:39.642483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:23:15.254 [2024-07-15 14:04:39.642497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 
14:04:39.724163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.724255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:15.254 [2024-07-15 14:04:39.724278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.630 ms 00:23:15.254 [2024-07-15 14:04:39.724294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.737499] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:15.254 [2024-07-15 14:04:39.751471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.751541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:15.254 [2024-07-15 14:04:39.751568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.978 ms 00:23:15.254 [2024-07-15 14:04:39.751584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.751720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.751740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:15.254 [2024-07-15 14:04:39.751756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:15.254 [2024-07-15 14:04:39.751768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.751837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.751853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:15.254 [2024-07-15 14:04:39.751868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:15.254 [2024-07-15 14:04:39.751880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.751918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.751933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:15.254 [2024-07-15 14:04:39.751950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:15.254 [2024-07-15 14:04:39.751962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.752007] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:15.254 [2024-07-15 14:04:39.752023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.752039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:15.254 [2024-07-15 14:04:39.752051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:15.254 [2024-07-15 14:04:39.752065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.782997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.783062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:15.254 [2024-07-15 14:04:39.783084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.901 ms 00:23:15.254 [2024-07-15 14:04:39.783099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.783250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.254 [2024-07-15 14:04:39.783276] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:15.254 [2024-07-15 14:04:39.783290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:15.254 [2024-07-15 14:04:39.783336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.254 [2024-07-15 14:04:39.784370] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:15.254 [2024-07-15 14:04:39.788525] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 344.106 ms, result 0 00:23:15.255 [2024-07-15 14:04:39.789468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:15.512 Some configs were skipped because the RPC state that can call them passed over. 00:23:15.512 14:04:39 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:15.769 [2024-07-15 14:04:40.059232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.769 [2024-07-15 14:04:40.059514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:15.769 [2024-07-15 14:04:40.059679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:23:15.769 [2024-07-15 14:04:40.059735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.769 [2024-07-15 14:04:40.059879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.018 ms, result 0 00:23:15.769 true 00:23:15.769 14:04:40 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:15.769 [2024-07-15 14:04:40.294969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.769 [2024-07-15 14:04:40.295209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:15.769 [2024-07-15 14:04:40.295360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:23:15.769 [2024-07-15 14:04:40.295422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.769 [2024-07-15 14:04:40.295547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.410 ms, result 0 00:23:15.769 true 00:23:15.769 14:04:40 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 81222 00:23:15.769 14:04:40 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81222 ']' 00:23:15.769 14:04:40 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81222 00:23:16.026 14:04:40 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:23:16.026 14:04:40 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:16.026 14:04:40 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81222 00:23:16.026 14:04:40 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:16.026 14:04:40 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:16.026 14:04:40 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81222' 00:23:16.026 killing process with pid 81222 00:23:16.026 14:04:40 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81222 00:23:16.026 14:04:40 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81222 00:23:16.960 [2024-07-15 14:04:41.276765] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.277019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:16.960 [2024-07-15 14:04:41.277152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:16.960 [2024-07-15 14:04:41.277331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.277509] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:16.960 [2024-07-15 14:04:41.280997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.281046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:16.960 [2024-07-15 14:04:41.281064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.315 ms 00:23:16.960 [2024-07-15 14:04:41.281091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.281439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.281469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:16.960 [2024-07-15 14:04:41.281485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:23:16.960 [2024-07-15 14:04:41.281499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.285689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.285737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:16.960 [2024-07-15 14:04:41.285758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.165 ms 00:23:16.960 [2024-07-15 14:04:41.285773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.293682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.293733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:16.960 [2024-07-15 14:04:41.293752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.864 ms 00:23:16.960 [2024-07-15 14:04:41.293769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.306490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.306556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:16.960 [2024-07-15 14:04:41.306586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.626 ms 00:23:16.960 [2024-07-15 14:04:41.306603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.314947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.315015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:16.960 [2024-07-15 14:04:41.315037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.269 ms 00:23:16.960 [2024-07-15 14:04:41.315052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.315218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.315242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:16.960 [2024-07-15 14:04:41.315257] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:23:16.960 [2024-07-15 14:04:41.315287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.328660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.328752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:16.960 [2024-07-15 14:04:41.328772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.320 ms 00:23:16.960 [2024-07-15 14:04:41.328787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.341554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.341606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:16.960 [2024-07-15 14:04:41.341624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.647 ms 00:23:16.960 [2024-07-15 14:04:41.341645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.353898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.353951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:16.960 [2024-07-15 14:04:41.353970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.181 ms 00:23:16.960 [2024-07-15 14:04:41.353984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.366134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.960 [2024-07-15 14:04:41.366184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:16.960 [2024-07-15 14:04:41.366201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.049 ms 00:23:16.960 [2024-07-15 14:04:41.366215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.960 [2024-07-15 14:04:41.366281] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:16.960 [2024-07-15 14:04:41.366336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 
14:04:41.366518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:16.960 [2024-07-15 14:04:41.366850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.366986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.367000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.367022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:16.960 [2024-07-15 14:04:41.367035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:16.961 [2024-07-15 14:04:41.367764] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:16.961 [2024-07-15 14:04:41.367782] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 96438210-a730-46dd-94de-bf7d6eb48d99 00:23:16.961 [2024-07-15 14:04:41.367803] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:16.961 [2024-07-15 14:04:41.367815] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:16.961 [2024-07-15 14:04:41.367828] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:16.961 [2024-07-15 14:04:41.367840] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:16.961 [2024-07-15 14:04:41.367853] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:16.961 [2024-07-15 14:04:41.367865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:16.961 [2024-07-15 14:04:41.367879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:16.961 [2024-07-15 14:04:41.367890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:16.961 [2024-07-15 14:04:41.367917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:16.961 [2024-07-15 14:04:41.367929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:16.961 [2024-07-15 14:04:41.367943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:16.961 [2024-07-15 14:04:41.367956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:23:16.961 [2024-07-15 14:04:41.367970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.961 [2024-07-15 14:04:41.384599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.961 [2024-07-15 14:04:41.384666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:16.961 [2024-07-15 14:04:41.384687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.580 ms 00:23:16.961 [2024-07-15 14:04:41.384705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.961 [2024-07-15 14:04:41.385197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.961 [2024-07-15 14:04:41.385228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:16.961 [2024-07-15 14:04:41.385248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:23:16.961 [2024-07-15 14:04:41.385267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.961 [2024-07-15 14:04:41.440280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.961 [2024-07-15 14:04:41.440371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:16.961 [2024-07-15 14:04:41.440391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.961 [2024-07-15 14:04:41.440406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.961 [2024-07-15 14:04:41.440547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.961 [2024-07-15 14:04:41.440569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.961 [2024-07-15 14:04:41.440583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.961 [2024-07-15 14:04:41.440601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.961 [2024-07-15 14:04:41.440676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.961 [2024-07-15 14:04:41.440700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.961 [2024-07-15 14:04:41.440714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.961 [2024-07-15 14:04:41.440729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.961 [2024-07-15 14:04:41.440765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.961 [2024-07-15 14:04:41.440782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.961 [2024-07-15 14:04:41.440794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.961 [2024-07-15 14:04:41.440807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.220 [2024-07-15 14:04:41.538866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.220 [2024-07-15 14:04:41.538942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:17.220 [2024-07-15 14:04:41.538962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.220 [2024-07-15 14:04:41.538977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.220 [2024-07-15 
14:04:41.623211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.220 [2024-07-15 14:04:41.623292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:17.220 [2024-07-15 14:04:41.623330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.220 [2024-07-15 14:04:41.623348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.220 [2024-07-15 14:04:41.623457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.220 [2024-07-15 14:04:41.623485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:17.220 [2024-07-15 14:04:41.623499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.220 [2024-07-15 14:04:41.623515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.220 [2024-07-15 14:04:41.623551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.220 [2024-07-15 14:04:41.623568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:17.220 [2024-07-15 14:04:41.623580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.220 [2024-07-15 14:04:41.623594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.220 [2024-07-15 14:04:41.623723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.220 [2024-07-15 14:04:41.623747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:17.220 [2024-07-15 14:04:41.623760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.220 [2024-07-15 14:04:41.623774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.220 [2024-07-15 14:04:41.623824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.220 [2024-07-15 14:04:41.623855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:17.220 [2024-07-15 14:04:41.623869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.220 [2024-07-15 14:04:41.623883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.220 [2024-07-15 14:04:41.623932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.220 [2024-07-15 14:04:41.623954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:17.220 [2024-07-15 14:04:41.623967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.220 [2024-07-15 14:04:41.623982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.220 [2024-07-15 14:04:41.624037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.220 [2024-07-15 14:04:41.624058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:17.220 [2024-07-15 14:04:41.624070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.220 [2024-07-15 14:04:41.624084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.220 [2024-07-15 14:04:41.624244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.464 ms, result 0 00:23:18.155 14:04:42 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:18.155 14:04:42 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:18.155 [2024-07-15 14:04:42.644640] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:18.155 [2024-07-15 14:04:42.644815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81284 ] 00:23:18.414 [2024-07-15 14:04:42.812568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.698 [2024-07-15 14:04:42.996259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.956 [2024-07-15 14:04:43.303853] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.956 [2024-07-15 14:04:43.303928] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.956 [2024-07-15 14:04:43.465355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.465416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:18.956 [2024-07-15 14:04:43.465435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:18.956 [2024-07-15 14:04:43.465447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.468606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.468649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:18.956 [2024-07-15 14:04:43.468666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.129 ms 00:23:18.956 [2024-07-15 14:04:43.468677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.468795] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:18.956 [2024-07-15 14:04:43.469749] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:18.956 [2024-07-15 14:04:43.469788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.469802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:18.956 [2024-07-15 14:04:43.469814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:23:18.956 [2024-07-15 14:04:43.469825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.471109] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:18.956 [2024-07-15 14:04:43.487359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.487404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:18.956 [2024-07-15 14:04:43.487428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.251 ms 00:23:18.956 [2024-07-15 14:04:43.487440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.487562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.487584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:18.956 [2024-07-15 14:04:43.487598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:23:18.956 [2024-07-15 14:04:43.487620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.491888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.491939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:18.956 [2024-07-15 14:04:43.491955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.210 ms 00:23:18.956 [2024-07-15 14:04:43.491967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.492095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.492127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:18.956 [2024-07-15 14:04:43.492140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:18.956 [2024-07-15 14:04:43.492151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.492193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.492209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:18.956 [2024-07-15 14:04:43.492221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:18.956 [2024-07-15 14:04:43.492235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.492267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:18.956 [2024-07-15 14:04:43.496477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.496513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:18.956 [2024-07-15 14:04:43.496528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.219 ms 00:23:18.956 [2024-07-15 14:04:43.496540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.496609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.496627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:18.956 [2024-07-15 14:04:43.496639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:18.956 [2024-07-15 14:04:43.496650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.496682] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:18.956 [2024-07-15 14:04:43.496709] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:18.956 [2024-07-15 14:04:43.496755] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:18.956 [2024-07-15 14:04:43.496775] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:18.956 [2024-07-15 14:04:43.496880] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:18.956 [2024-07-15 14:04:43.496912] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:18.956 [2024-07-15 14:04:43.496929] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:18.956 [2024-07-15 14:04:43.496944] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:18.956 [2024-07-15 14:04:43.496957] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:18.956 [2024-07-15 14:04:43.496969] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:18.956 [2024-07-15 14:04:43.496985] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:18.956 [2024-07-15 14:04:43.496995] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:18.956 [2024-07-15 14:04:43.497005] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:18.956 [2024-07-15 14:04:43.497017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.497028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:18.956 [2024-07-15 14:04:43.497039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:23:18.956 [2024-07-15 14:04:43.497050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.497147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.956 [2024-07-15 14:04:43.497161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:18.956 [2024-07-15 14:04:43.497173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:18.956 [2024-07-15 14:04:43.497189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.956 [2024-07-15 14:04:43.497298] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:18.957 [2024-07-15 14:04:43.497347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:18.957 [2024-07-15 14:04:43.497361] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.957 [2024-07-15 14:04:43.497372] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:18.957 [2024-07-15 14:04:43.497397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:18.957 [2024-07-15 14:04:43.497417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:18.957 [2024-07-15 14:04:43.497427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.957 [2024-07-15 14:04:43.497447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:18.957 [2024-07-15 14:04:43.497457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:18.957 [2024-07-15 14:04:43.497466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.957 [2024-07-15 14:04:43.497477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:18.957 [2024-07-15 14:04:43.497488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:18.957 [2024-07-15 14:04:43.497498] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497508] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:18.957 [2024-07-15 14:04:43.497518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:18.957 [2024-07-15 14:04:43.497542] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:18.957 [2024-07-15 14:04:43.497563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.957 [2024-07-15 14:04:43.497583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:18.957 [2024-07-15 14:04:43.497593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.957 [2024-07-15 14:04:43.497613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:18.957 [2024-07-15 14:04:43.497623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497633] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.957 [2024-07-15 14:04:43.497643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:18.957 [2024-07-15 14:04:43.497652] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.957 [2024-07-15 14:04:43.497672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:18.957 [2024-07-15 14:04:43.497682] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.957 [2024-07-15 14:04:43.497701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:18.957 [2024-07-15 14:04:43.497711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:18.957 [2024-07-15 14:04:43.497721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.957 [2024-07-15 14:04:43.497732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:18.957 [2024-07-15 14:04:43.497742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:18.957 [2024-07-15 14:04:43.497752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:18.957 [2024-07-15 14:04:43.497771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:18.957 [2024-07-15 14:04:43.497781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497791] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:18.957 [2024-07-15 14:04:43.497802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:18.957 [2024-07-15 14:04:43.497813] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.957 [2024-07-15 14:04:43.497823] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.957 [2024-07-15 14:04:43.497835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:18.957 
[2024-07-15 14:04:43.497845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:18.957 [2024-07-15 14:04:43.497855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:18.957 [2024-07-15 14:04:43.497866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:18.957 [2024-07-15 14:04:43.497875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:18.957 [2024-07-15 14:04:43.497886] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:18.957 [2024-07-15 14:04:43.497897] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:18.957 [2024-07-15 14:04:43.497916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.957 [2024-07-15 14:04:43.497929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:18.957 [2024-07-15 14:04:43.497940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:18.957 [2024-07-15 14:04:43.497951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:18.957 [2024-07-15 14:04:43.497962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:18.957 [2024-07-15 14:04:43.497973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:18.957 [2024-07-15 14:04:43.497984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:18.957 [2024-07-15 14:04:43.497995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:18.957 [2024-07-15 14:04:43.498006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:18.957 [2024-07-15 14:04:43.498017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:18.957 [2024-07-15 14:04:43.498028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:18.957 [2024-07-15 14:04:43.498040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:18.957 [2024-07-15 14:04:43.498050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:18.957 [2024-07-15 14:04:43.498061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:18.957 [2024-07-15 14:04:43.498073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:18.957 [2024-07-15 14:04:43.498084] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:18.957 [2024-07-15 14:04:43.498097] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.957 [2024-07-15 14:04:43.498109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:18.957 [2024-07-15 14:04:43.498121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:18.957 [2024-07-15 14:04:43.498132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:18.957 [2024-07-15 14:04:43.498143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:18.957 [2024-07-15 14:04:43.498157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.957 [2024-07-15 14:04:43.498168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:18.957 [2024-07-15 14:04:43.498180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:23:18.957 [2024-07-15 14:04:43.498191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.541173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.541240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:19.216 [2024-07-15 14:04:43.541260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.880 ms 00:23:19.216 [2024-07-15 14:04:43.541273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.541504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.541526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:19.216 [2024-07-15 14:04:43.541540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:19.216 [2024-07-15 14:04:43.541558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.579725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.579787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:19.216 [2024-07-15 14:04:43.579807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.133 ms 00:23:19.216 [2024-07-15 14:04:43.579818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.579954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.579973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:19.216 [2024-07-15 14:04:43.579989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:19.216 [2024-07-15 14:04:43.580000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.580332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.580359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:19.216 [2024-07-15 14:04:43.580374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:23:19.216 [2024-07-15 14:04:43.580385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 
14:04:43.580543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.580652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:19.216 [2024-07-15 14:04:43.580666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:23:19.216 [2024-07-15 14:04:43.580677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.596821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.596875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:19.216 [2024-07-15 14:04:43.596892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.112 ms 00:23:19.216 [2024-07-15 14:04:43.596904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.613115] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:19.216 [2024-07-15 14:04:43.613165] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:19.216 [2024-07-15 14:04:43.613185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.613198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:19.216 [2024-07-15 14:04:43.613212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.105 ms 00:23:19.216 [2024-07-15 14:04:43.613223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.642859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.642919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:19.216 [2024-07-15 14:04:43.642939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.516 ms 00:23:19.216 [2024-07-15 14:04:43.642951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.659077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.659127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:19.216 [2024-07-15 14:04:43.659146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.014 ms 00:23:19.216 [2024-07-15 14:04:43.659158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.674616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.674659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:19.216 [2024-07-15 14:04:43.674676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.346 ms 00:23:19.216 [2024-07-15 14:04:43.674687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.675516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.675555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:19.216 [2024-07-15 14:04:43.675570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:23:19.216 [2024-07-15 14:04:43.675581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.747122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:19.216 [2024-07-15 14:04:43.747199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:19.216 [2024-07-15 14:04:43.747220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.504 ms 00:23:19.216 [2024-07-15 14:04:43.747232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.216 [2024-07-15 14:04:43.759852] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:19.475 [2024-07-15 14:04:43.774715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.475 [2024-07-15 14:04:43.774802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:19.475 [2024-07-15 14:04:43.774826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.307 ms 00:23:19.475 [2024-07-15 14:04:43.774838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.475 [2024-07-15 14:04:43.774984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.475 [2024-07-15 14:04:43.775004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:19.475 [2024-07-15 14:04:43.775022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:19.475 [2024-07-15 14:04:43.775033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.475 [2024-07-15 14:04:43.775102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.475 [2024-07-15 14:04:43.775118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:19.475 [2024-07-15 14:04:43.775130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:19.475 [2024-07-15 14:04:43.775141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.475 [2024-07-15 14:04:43.775174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.475 [2024-07-15 14:04:43.775188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:19.475 [2024-07-15 14:04:43.775201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:19.475 [2024-07-15 14:04:43.775217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.475 [2024-07-15 14:04:43.775254] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:19.475 [2024-07-15 14:04:43.775271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.475 [2024-07-15 14:04:43.775283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:19.475 [2024-07-15 14:04:43.775296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:19.475 [2024-07-15 14:04:43.775326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.475 [2024-07-15 14:04:43.807206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.475 [2024-07-15 14:04:43.807267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:19.475 [2024-07-15 14:04:43.807295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.842 ms 00:23:19.475 [2024-07-15 14:04:43.807320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.475 [2024-07-15 14:04:43.807479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.475 [2024-07-15 14:04:43.807501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:23:19.475 [2024-07-15 14:04:43.807515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:19.475 [2024-07-15 14:04:43.807527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.475 [2024-07-15 14:04:43.808574] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:19.475 [2024-07-15 14:04:43.812790] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 342.881 ms, result 0 00:23:19.475 [2024-07-15 14:04:43.813492] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:19.475 [2024-07-15 14:04:43.829891] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:29.342  Copying: 256/256 [MB] (average 26 MBps)[2024-07-15 14:04:53.660796] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:29.342 [2024-07-15 14:04:53.673069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.342 [2024-07-15 14:04:53.673115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:29.342 [2024-07-15 14:04:53.673135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:29.342 [2024-07-15 14:04:53.673147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.342 [2024-07-15 14:04:53.673179] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:29.342 [2024-07-15 14:04:53.676487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.342 [2024-07-15 14:04:53.676529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:29.342 [2024-07-15 14:04:53.676544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.287 ms 00:23:29.342 [2024-07-15 14:04:53.676556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.342 [2024-07-15 14:04:53.676839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.342 [2024-07-15 14:04:53.676864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:29.342 [2024-07-15 14:04:53.676884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:23:29.342 [2024-07-15 14:04:53.676896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.342 [2024-07-15 14:04:53.680670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.342 [2024-07-15 14:04:53.680701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:29.342 [2024-07-15 14:04:53.680715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.752 ms 00:23:29.342 [2024-07-15 14:04:53.680733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.342 [2024-07-15 14:04:53.688254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.342 [2024-07-15 14:04:53.688285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:29.342 [2024-07-15 14:04:53.688299] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.496 ms 00:23:29.342 [2024-07-15 14:04:53.688321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.342 [2024-07-15 14:04:53.719396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.342 [2024-07-15 14:04:53.719442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:29.342 [2024-07-15 14:04:53.719459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.000 ms 00:23:29.342 [2024-07-15 14:04:53.719471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.342 [2024-07-15 14:04:53.737426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.342 [2024-07-15 14:04:53.737471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:29.342 [2024-07-15 14:04:53.737489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.884 ms 00:23:29.343 [2024-07-15 14:04:53.737500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.343 [2024-07-15 14:04:53.737663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.343 [2024-07-15 14:04:53.737682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:29.343 [2024-07-15 14:04:53.737695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:23:29.343 [2024-07-15 14:04:53.737706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.343 [2024-07-15 14:04:53.769030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.343 [2024-07-15 14:04:53.769082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:29.343 [2024-07-15 14:04:53.769099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.300 ms 00:23:29.343 [2024-07-15 14:04:53.769111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.343 [2024-07-15 14:04:53.800528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.343 [2024-07-15 14:04:53.800597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:29.343 [2024-07-15 14:04:53.800617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.340 ms 00:23:29.343 [2024-07-15 14:04:53.800629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.343 [2024-07-15 14:04:53.832200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.343 [2024-07-15 14:04:53.832265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:29.343 [2024-07-15 14:04:53.832285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.477 ms 00:23:29.343 [2024-07-15 14:04:53.832297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.343 [2024-07-15 14:04:53.863721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.343 [2024-07-15 14:04:53.863782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:29.343 [2024-07-15 14:04:53.863801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.278 ms 00:23:29.343 [2024-07-15 14:04:53.863813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.343 [2024-07-15 14:04:53.863886] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:29.343 [2024-07-15 14:04:53.863913] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.863939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.863952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.863964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.863975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.863987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.863998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864236] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 
14:04:53.864553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:23:29.343 [2024-07-15 14:04:53.864856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:29.343 [2024-07-15 14:04:53.864891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.864902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.864913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.864925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.864936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.864948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.864960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.864972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.864983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.864995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:23:29.344 [2024-07-15 14:04:53.865181] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:29.344 [2024-07-15 14:04:53.865194] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 96438210-a730-46dd-94de-bf7d6eb48d99 00:23:29.344 [2024-07-15 14:04:53.865206] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:29.344 [2024-07-15 14:04:53.865217] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:29.344 [2024-07-15 14:04:53.865243] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:29.344 [2024-07-15 14:04:53.865254] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:29.344 [2024-07-15 14:04:53.865265] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:29.344 [2024-07-15 14:04:53.865277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:29.344 [2024-07-15 14:04:53.865288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:29.344 [2024-07-15 14:04:53.865297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:29.344 [2024-07-15 14:04:53.865322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:29.344 [2024-07-15 14:04:53.865335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.344 [2024-07-15 14:04:53.865347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:29.344 [2024-07-15 14:04:53.865359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:23:29.344 [2024-07-15 14:04:53.865376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.344 [2024-07-15 14:04:53.881949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.344 [2024-07-15 14:04:53.881994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:29.344 [2024-07-15 14:04:53.882012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.544 ms 00:23:29.344 [2024-07-15 14:04:53.882024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.344 [2024-07-15 14:04:53.882511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.344 [2024-07-15 14:04:53.882541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:29.344 [2024-07-15 14:04:53.882564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:23:29.344 [2024-07-15 14:04:53.882576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.602 [2024-07-15 14:04:53.922443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.602 [2024-07-15 14:04:53.922511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:29.602 [2024-07-15 14:04:53.922529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.602 [2024-07-15 14:04:53.922541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.602 [2024-07-15 14:04:53.922652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.602 [2024-07-15 14:04:53.922670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:29.602 [2024-07-15 14:04:53.922690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.602 [2024-07-15 14:04:53.922701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:29.602 [2024-07-15 14:04:53.922767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.602 [2024-07-15 14:04:53.922786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:29.602 [2024-07-15 14:04:53.922799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.602 [2024-07-15 14:04:53.922810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.602 [2024-07-15 14:04:53.922835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.602 [2024-07-15 14:04:53.922849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:29.602 [2024-07-15 14:04:53.922860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.602 [2024-07-15 14:04:53.922877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.602 [2024-07-15 14:04:54.021582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.602 [2024-07-15 14:04:54.021669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:29.602 [2024-07-15 14:04:54.021690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.602 [2024-07-15 14:04:54.021702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.602 [2024-07-15 14:04:54.126757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.602 [2024-07-15 14:04:54.126860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:29.602 [2024-07-15 14:04:54.126894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.602 [2024-07-15 14:04:54.126930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.602 [2024-07-15 14:04:54.127051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.602 [2024-07-15 14:04:54.127082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:29.602 [2024-07-15 14:04:54.127106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.602 [2024-07-15 14:04:54.127126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.602 [2024-07-15 14:04:54.127182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.602 [2024-07-15 14:04:54.127207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:29.603 [2024-07-15 14:04:54.127230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.603 [2024-07-15 14:04:54.127250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.603 [2024-07-15 14:04:54.127469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.603 [2024-07-15 14:04:54.127510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:29.603 [2024-07-15 14:04:54.127536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.603 [2024-07-15 14:04:54.127557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.603 [2024-07-15 14:04:54.127642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.603 [2024-07-15 14:04:54.127672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:29.603 [2024-07-15 14:04:54.127694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.603 [2024-07-15 
14:04:54.127715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.603 [2024-07-15 14:04:54.127795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.603 [2024-07-15 14:04:54.127823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:29.603 [2024-07-15 14:04:54.127846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.603 [2024-07-15 14:04:54.127866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.603 [2024-07-15 14:04:54.127950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.603 [2024-07-15 14:04:54.127978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:29.603 [2024-07-15 14:04:54.127999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.603 [2024-07-15 14:04:54.128016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.603 [2024-07-15 14:04:54.128283] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 455.187 ms, result 0 00:23:30.977 00:23:30.977 00:23:30.977 14:04:55 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:23:30.977 14:04:55 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:31.543 14:04:55 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:31.543 [2024-07-15 14:04:55.882321] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:23:31.543 [2024-07-15 14:04:55.882483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81422 ] 00:23:31.543 [2024-07-15 14:04:56.045785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.802 [2024-07-15 14:04:56.269457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.071 [2024-07-15 14:04:56.576921] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:32.071 [2024-07-15 14:04:56.576999] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:32.330 [2024-07-15 14:04:56.737484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.330 [2024-07-15 14:04:56.737550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:32.330 [2024-07-15 14:04:56.737570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:32.330 [2024-07-15 14:04:56.737584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.740742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.330 [2024-07-15 14:04:56.740785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:32.330 [2024-07-15 14:04:56.740802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.129 ms 00:23:32.330 [2024-07-15 14:04:56.740813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.740975] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:32.330 [2024-07-15 14:04:56.741928] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:32.330 [2024-07-15 14:04:56.741976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.330 [2024-07-15 14:04:56.741990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:32.330 [2024-07-15 14:04:56.742003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:23:32.330 [2024-07-15 14:04:56.742015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.743213] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:32.330 [2024-07-15 14:04:56.759291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.330 [2024-07-15 14:04:56.759345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:32.330 [2024-07-15 14:04:56.759368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.078 ms 00:23:32.330 [2024-07-15 14:04:56.759381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.759500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.330 [2024-07-15 14:04:56.759522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:32.330 [2024-07-15 14:04:56.759536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:32.330 [2024-07-15 14:04:56.759547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.763847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:32.330 [2024-07-15 14:04:56.763899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:32.330 [2024-07-15 14:04:56.763915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.238 ms 00:23:32.330 [2024-07-15 14:04:56.763927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.764065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.330 [2024-07-15 14:04:56.764086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:32.330 [2024-07-15 14:04:56.764100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:32.330 [2024-07-15 14:04:56.764111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.764156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.330 [2024-07-15 14:04:56.764174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:32.330 [2024-07-15 14:04:56.764187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:32.330 [2024-07-15 14:04:56.764203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.764238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:32.330 [2024-07-15 14:04:56.768474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.330 [2024-07-15 14:04:56.768511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:32.330 [2024-07-15 14:04:56.768527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.247 ms 00:23:32.330 [2024-07-15 14:04:56.768538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.768607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.330 [2024-07-15 14:04:56.768626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:32.330 [2024-07-15 14:04:56.768639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:32.330 [2024-07-15 14:04:56.768651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.768684] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:32.330 [2024-07-15 14:04:56.768715] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:32.330 [2024-07-15 14:04:56.768761] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:32.330 [2024-07-15 14:04:56.768782] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:32.330 [2024-07-15 14:04:56.768888] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:32.330 [2024-07-15 14:04:56.768903] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:32.330 [2024-07-15 14:04:56.768918] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:32.330 [2024-07-15 14:04:56.768932] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:32.330 [2024-07-15 14:04:56.768946] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:32.330 [2024-07-15 14:04:56.768958] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:32.330 [2024-07-15 14:04:56.768974] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:32.330 [2024-07-15 14:04:56.768985] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:32.330 [2024-07-15 14:04:56.768996] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:32.330 [2024-07-15 14:04:56.769008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.330 [2024-07-15 14:04:56.769019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:32.330 [2024-07-15 14:04:56.769031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:23:32.330 [2024-07-15 14:04:56.769042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.769140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.330 [2024-07-15 14:04:56.769157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:32.330 [2024-07-15 14:04:56.769171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:32.330 [2024-07-15 14:04:56.769187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.330 [2024-07-15 14:04:56.769298] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:32.330 [2024-07-15 14:04:56.769341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:32.330 [2024-07-15 14:04:56.769354] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:32.330 [2024-07-15 14:04:56.769366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:32.331 [2024-07-15 14:04:56.769389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:32.331 [2024-07-15 14:04:56.769411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:32.331 [2024-07-15 14:04:56.769421] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769432] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:32.331 [2024-07-15 14:04:56.769442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:32.331 [2024-07-15 14:04:56.769453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:32.331 [2024-07-15 14:04:56.769463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:32.331 [2024-07-15 14:04:56.769473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:32.331 [2024-07-15 14:04:56.769484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:32.331 [2024-07-15 14:04:56.769494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:32.331 [2024-07-15 14:04:56.769515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:32.331 [2024-07-15 14:04:56.769540] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:32.331 [2024-07-15 14:04:56.769562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769572] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.331 [2024-07-15 14:04:56.769582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:32.331 [2024-07-15 14:04:56.769593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.331 [2024-07-15 14:04:56.769613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:32.331 [2024-07-15 14:04:56.769623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769634] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.331 [2024-07-15 14:04:56.769644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:32.331 [2024-07-15 14:04:56.769654] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769664] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.331 [2024-07-15 14:04:56.769675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:32.331 [2024-07-15 14:04:56.769685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769695] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:32.331 [2024-07-15 14:04:56.769705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:32.331 [2024-07-15 14:04:56.769716] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:32.331 [2024-07-15 14:04:56.769726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:32.331 [2024-07-15 14:04:56.769737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:32.331 [2024-07-15 14:04:56.769748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:32.331 [2024-07-15 14:04:56.769758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:32.331 [2024-07-15 14:04:56.769779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:32.331 [2024-07-15 14:04:56.769789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769798] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:32.331 [2024-07-15 14:04:56.769810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:32.331 [2024-07-15 14:04:56.769821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:32.331 [2024-07-15 14:04:56.769831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.331 [2024-07-15 14:04:56.769843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:32.331 [2024-07-15 14:04:56.769853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:32.331 [2024-07-15 14:04:56.769864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:32.331 
[2024-07-15 14:04:56.769874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:32.331 [2024-07-15 14:04:56.769885] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:32.331 [2024-07-15 14:04:56.769895] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:32.331 [2024-07-15 14:04:56.769907] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:32.331 [2024-07-15 14:04:56.769925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:32.331 [2024-07-15 14:04:56.769938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:32.331 [2024-07-15 14:04:56.769949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:32.331 [2024-07-15 14:04:56.769961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:32.331 [2024-07-15 14:04:56.769972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:32.331 [2024-07-15 14:04:56.769983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:32.331 [2024-07-15 14:04:56.769994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:32.331 [2024-07-15 14:04:56.770005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:32.331 [2024-07-15 14:04:56.770017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:32.331 [2024-07-15 14:04:56.770028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:32.331 [2024-07-15 14:04:56.770039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:32.331 [2024-07-15 14:04:56.770051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:32.331 [2024-07-15 14:04:56.770062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:32.331 [2024-07-15 14:04:56.770073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:32.331 [2024-07-15 14:04:56.770085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:32.331 [2024-07-15 14:04:56.770100] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:32.331 [2024-07-15 14:04:56.770113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:32.331 [2024-07-15 14:04:56.770125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:32.331 [2024-07-15 14:04:56.770137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:32.331 [2024-07-15 14:04:56.770149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:32.331 [2024-07-15 14:04:56.770160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:32.331 [2024-07-15 14:04:56.770172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.331 [2024-07-15 14:04:56.770184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:32.331 [2024-07-15 14:04:56.770196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:23:32.331 [2024-07-15 14:04:56.770207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.331 [2024-07-15 14:04:56.813705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.331 [2024-07-15 14:04:56.813768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:32.331 [2024-07-15 14:04:56.813795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.385 ms 00:23:32.331 [2024-07-15 14:04:56.813817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.331 [2024-07-15 14:04:56.814047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.331 [2024-07-15 14:04:56.814069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:32.331 [2024-07-15 14:04:56.814084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:32.331 [2024-07-15 14:04:56.814109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.331 [2024-07-15 14:04:56.859051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.331 [2024-07-15 14:04:56.859117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:32.331 [2024-07-15 14:04:56.859147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.891 ms 00:23:32.331 [2024-07-15 14:04:56.859168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.331 [2024-07-15 14:04:56.859354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.331 [2024-07-15 14:04:56.859399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:32.331 [2024-07-15 14:04:56.859425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:32.331 [2024-07-15 14:04:56.859447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.331 [2024-07-15 14:04:56.859801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.331 [2024-07-15 14:04:56.859831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:32.331 [2024-07-15 14:04:56.859845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:23:32.331 [2024-07-15 14:04:56.859857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.331 [2024-07-15 14:04:56.860064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.331 [2024-07-15 14:04:56.860098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:32.331 [2024-07-15 14:04:56.860113] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:23:32.331 [2024-07-15 14:04:56.860124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.590 [2024-07-15 14:04:56.879347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.590 [2024-07-15 14:04:56.879422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:32.590 [2024-07-15 14:04:56.879453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.187 ms 00:23:32.590 [2024-07-15 14:04:56.879470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.590 [2024-07-15 14:04:56.898633] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:32.590 [2024-07-15 14:04:56.898709] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:32.591 [2024-07-15 14:04:56.898735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:56.898758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:32.591 [2024-07-15 14:04:56.898782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.063 ms 00:23:32.591 [2024-07-15 14:04:56.898800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:56.933501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:56.933587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:32.591 [2024-07-15 14:04:56.933617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.546 ms 00:23:32.591 [2024-07-15 14:04:56.933631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:56.952463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:56.952529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:32.591 [2024-07-15 14:04:56.952560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.649 ms 00:23:32.591 [2024-07-15 14:04:56.952577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:56.970808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:56.970872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:32.591 [2024-07-15 14:04:56.970900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.108 ms 00:23:32.591 [2024-07-15 14:04:56.970921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:56.971939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:56.971991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:32.591 [2024-07-15 14:04:56.972010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:23:32.591 [2024-07-15 14:04:56.972023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:57.048320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:57.048405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:32.591 [2024-07-15 14:04:57.048426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 76.237 ms 00:23:32.591 [2024-07-15 14:04:57.048439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:57.061424] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:32.591 [2024-07-15 14:04:57.075551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:57.075623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:32.591 [2024-07-15 14:04:57.075644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.942 ms 00:23:32.591 [2024-07-15 14:04:57.075656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:57.075800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:57.075825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:32.591 [2024-07-15 14:04:57.075839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:32.591 [2024-07-15 14:04:57.075851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:57.075920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:57.075939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:32.591 [2024-07-15 14:04:57.075952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:32.591 [2024-07-15 14:04:57.075964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:57.076005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:57.076021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:32.591 [2024-07-15 14:04:57.076040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:32.591 [2024-07-15 14:04:57.076051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:57.076090] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:32.591 [2024-07-15 14:04:57.076107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:57.076118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:32.591 [2024-07-15 14:04:57.076130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:32.591 [2024-07-15 14:04:57.076141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:57.107263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:57.107350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:32.591 [2024-07-15 14:04:57.107371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.089 ms 00:23:32.591 [2024-07-15 14:04:57.107383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.591 [2024-07-15 14:04:57.107518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.591 [2024-07-15 14:04:57.107540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:32.591 [2024-07-15 14:04:57.107553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:32.591 [2024-07-15 14:04:57.107565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
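Each management step above is emitted by mngt/ftl_mngt.c:trace_step as a fixed quadruple: an Action (or Rollback) marker, the step name, its duration in milliseconds, and a status code. The per-step durations (4.238 ms for memory pools, 43.385 ms for metadata, 76.237 ms for the P2L checkpoint restore, and so on) roll up into the 370.754 ms 'FTL startup' total reported just below. A minimal C sketch of that logging shape, using hypothetical names (step_begin, step_end, struct ftl_step) rather than SPDK's internal API:

/* Hedged sketch of the Action/name/duration/status pattern printed by
 * trace_step above; names are illustrative, not SPDK's. */
#include <stdio.h>
#include <time.h>

struct ftl_step {
    const char *name;
    struct timespec start;
};

static double elapsed_ms(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1e3 + (b->tv_nsec - a->tv_nsec) / 1e6;
}

static void step_begin(struct ftl_step *s, const char *name)
{
    s->name = name;
    clock_gettime(CLOCK_MONOTONIC, &s->start);
}

/* Print one quadruple and accumulate into the process-wide total,
 * mirroring how the 'Management process finished' line sums the steps. */
static void step_end(const struct ftl_step *s, int status, double *total_ms)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    double ms = elapsed_ms(&s->start, &now);
    *total_ms += ms;
    printf("Action\n");
    printf("  name: %s\n", s->name);
    printf("  duration: %.3f ms\n", ms);
    printf("  status: %d\n", status);
}

int main(void)
{
    struct ftl_step s;
    double total_ms = 0.0;

    step_begin(&s, "Initialize L2P");
    /* ... the actual step body would run here ... */
    step_end(&s, 0, &total_ms);

    printf("Management process finished, name 'FTL startup', "
           "duration = %.3f ms, result %d\n", total_ms, 0);
    return 0;
}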
00:23:32.591 [2024-07-15 14:04:57.108578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:32.591 [2024-07-15 14:04:57.112677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 370.754 ms, result 0 00:23:32.591 [2024-07-15 14:04:57.113473] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:32.591 [2024-07-15 14:04:57.129822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:32.851  Copying: 4096/4096 [kB] (average 28 MBps)[2024-07-15 14:04:57.272580] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:32.851 [2024-07-15 14:04:57.284869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.851 [2024-07-15 14:04:57.284916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:32.851 [2024-07-15 14:04:57.284936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:32.851 [2024-07-15 14:04:57.284949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.851 [2024-07-15 14:04:57.284989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:32.851 [2024-07-15 14:04:57.288297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.851 [2024-07-15 14:04:57.288339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:32.851 [2024-07-15 14:04:57.288354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.286 ms 00:23:32.851 [2024-07-15 14:04:57.288366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.851 [2024-07-15 14:04:57.289968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.851 [2024-07-15 14:04:57.290021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:32.851 [2024-07-15 14:04:57.290047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:23:32.851 [2024-07-15 14:04:57.290060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.851 [2024-07-15 14:04:57.294069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.851 [2024-07-15 14:04:57.294112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:32.851 [2024-07-15 14:04:57.294135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.982 ms 00:23:32.851 [2024-07-15 14:04:57.294147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.851 [2024-07-15 14:04:57.301745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.851 [2024-07-15 14:04:57.301784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:32.851 [2024-07-15 14:04:57.301801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.555 ms 00:23:32.851 [2024-07-15 14:04:57.301820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.851 [2024-07-15 14:04:57.332931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.851 [2024-07-15 14:04:57.332977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:32.851 [2024-07-15 14:04:57.332995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.031 ms 00:23:32.851 [2024-07-15 14:04:57.333007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.851 [2024-07-15 14:04:57.350594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.851 [2024-07-15 14:04:57.350655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:32.851 [2024-07-15 14:04:57.350673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.516 ms 00:23:32.851 [2024-07-15 14:04:57.350695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.851 [2024-07-15 14:04:57.350869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.851 [2024-07-15 14:04:57.350901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:32.851 [2024-07-15 14:04:57.350916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:23:32.851 [2024-07-15 14:04:57.350928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.851 [2024-07-15 14:04:57.382005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.851 [2024-07-15 14:04:57.382049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:32.851 [2024-07-15 14:04:57.382066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.052 ms 00:23:32.851 [2024-07-15 14:04:57.382078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.110 [2024-07-15 14:04:57.412886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.110 [2024-07-15 14:04:57.412937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:33.110 [2024-07-15 14:04:57.412955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.739 ms 00:23:33.110 [2024-07-15 14:04:57.412966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.110 [2024-07-15 14:04:57.443544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.110 [2024-07-15 14:04:57.443590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:33.110 [2024-07-15 14:04:57.443607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.499 ms 00:23:33.110 [2024-07-15 14:04:57.443618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.110 [2024-07-15 14:04:57.474160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.110 [2024-07-15 14:04:57.474205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:33.110 [2024-07-15 14:04:57.474222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.438 ms 00:23:33.110 [2024-07-15 14:04:57.474234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.110 [2024-07-15 14:04:57.474316] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:33.110 [2024-07-15 14:04:57.474367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:33.110 [2024-07-15 14:04:57.474383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:33.110 [2024-07-15 14:04:57.474396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:33.110 [2024-07-15 14:04:57.474409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:33.110 [2024-07-15 
14:04:57.474421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
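The band validity dump being printed here (the listing resumes right after this sketch) reports, for each of the 100 bands, the valid-block count out of 261120, the write count, and the band state; after 'Set FTL clean state' every band is free with zero valid blocks. A hedged C sketch of such a dump loop, where struct band and its fields are illustrative stand-ins, not SPDK's types:

/* Minimal sketch of a band-validity dump like ftl_dev_dump_bands above. */
#include <stdio.h>

#define NUM_BANDS   100
#define BAND_BLOCKS 261120UL   /* blocks per band, as in this log */

struct band {
    unsigned long valid;   /* blocks still holding live data */
    unsigned long wr_cnt;  /* completed writes to the band */
    const char *state;     /* "free", "open", "closed", ... */
};

static void dump_bands(const struct band *b, int n)
{
    printf("Bands validity:\n");
    for (int i = 0; i < n; i++)
        printf("Band %d: %lu / %lu wr_cnt: %lu state: %s\n",
               i + 1, b[i].valid, BAND_BLOCKS, b[i].wr_cnt, b[i].state);
}

int main(void)
{
    struct band bands[NUM_BANDS];
    for (int i = 0; i < NUM_BANDS; i++)
        bands[i] = (struct band){ .valid = 0, .wr_cnt = 0, .state = "free" };
    dump_bands(bands, NUM_BANDS); /* all free: the device was just cleaned */
    return 0;
}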
00:23:33.111 [2024-07-15 14:04:57.474729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.474990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:33.111 [2024-07-15 14:04:57.475629] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:33.112 [2024-07-15 14:04:57.475641] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 96438210-a730-46dd-94de-bf7d6eb48d99 00:23:33.112 [2024-07-15 14:04:57.475653] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:33.112 [2024-07-15 14:04:57.475665] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:33.112 
[2024-07-15 14:04:57.475689] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:33.112 [2024-07-15 14:04:57.475701] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:33.112 [2024-07-15 14:04:57.475712] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:33.112 [2024-07-15 14:04:57.475723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:33.112 [2024-07-15 14:04:57.475734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:33.112 [2024-07-15 14:04:57.475744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:33.112 [2024-07-15 14:04:57.475754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:33.112 [2024-07-15 14:04:57.475765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.112 [2024-07-15 14:04:57.475777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:33.112 [2024-07-15 14:04:57.475794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:23:33.112 [2024-07-15 14:04:57.475805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.112 [2024-07-15 14:04:57.492297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.112 [2024-07-15 14:04:57.492356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:33.112 [2024-07-15 14:04:57.492373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.464 ms 00:23:33.112 [2024-07-15 14:04:57.492385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.112 [2024-07-15 14:04:57.492840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.112 [2024-07-15 14:04:57.492875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:33.112 [2024-07-15 14:04:57.492890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:23:33.112 [2024-07-15 14:04:57.492901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.112 [2024-07-15 14:04:57.533499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.112 [2024-07-15 14:04:57.533581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.112 [2024-07-15 14:04:57.533602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.112 [2024-07-15 14:04:57.533615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.112 [2024-07-15 14:04:57.533741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.112 [2024-07-15 14:04:57.533767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.112 [2024-07-15 14:04:57.533780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.112 [2024-07-15 14:04:57.533791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.112 [2024-07-15 14:04:57.533862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.112 [2024-07-15 14:04:57.533880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.112 [2024-07-15 14:04:57.533893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.112 [2024-07-15 14:04:57.533904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.112 [2024-07-15 14:04:57.533929] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:23:33.112 [2024-07-15 14:04:57.533943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.112 [2024-07-15 14:04:57.533962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.112 [2024-07-15 14:04:57.533973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.112 [2024-07-15 14:04:57.632409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.112 [2024-07-15 14:04:57.632479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.112 [2024-07-15 14:04:57.632498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.112 [2024-07-15 14:04:57.632510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.370 [2024-07-15 14:04:57.716394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.370 [2024-07-15 14:04:57.716481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.370 [2024-07-15 14:04:57.716501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.370 [2024-07-15 14:04:57.716514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.370 [2024-07-15 14:04:57.716603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.370 [2024-07-15 14:04:57.716622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.370 [2024-07-15 14:04:57.716634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.370 [2024-07-15 14:04:57.716646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.370 [2024-07-15 14:04:57.716681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.370 [2024-07-15 14:04:57.716696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.370 [2024-07-15 14:04:57.716707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.370 [2024-07-15 14:04:57.716724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.370 [2024-07-15 14:04:57.716853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.370 [2024-07-15 14:04:57.716872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.370 [2024-07-15 14:04:57.716885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.370 [2024-07-15 14:04:57.716896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.370 [2024-07-15 14:04:57.716945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.370 [2024-07-15 14:04:57.716962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:33.370 [2024-07-15 14:04:57.716974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.370 [2024-07-15 14:04:57.716985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.370 [2024-07-15 14:04:57.717037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.370 [2024-07-15 14:04:57.717053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.370 [2024-07-15 14:04:57.717066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.370 [2024-07-15 14:04:57.717077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
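The statistics dump above shows why WAF is printed as "inf": all 960 total writes in this run were metadata, with zero user writes. Assuming the standard definition of write amplification, WAF = total device writes / user writes (the log itself only prints the result), the ratio degenerates to infinity. A small C check of that arithmetic:

/* Hedged sketch: WAF as total writes over user writes; with doubles,
 * 960 / 0 yields IEEE +inf without trapping, which printf renders as
 * "inf", matching the dump above. */
#include <stdio.h>

static double waf(double total_writes, double user_writes)
{
    return total_writes / user_writes;
}

int main(void)
{
    printf("WAF: %g\n", waf(960, 0));   /* this run: WAF: inf */
    printf("WAF: %g\n", waf(960, 320)); /* a hypothetical workload: WAF: 3 */
    return 0;
}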
00:23:33.370 [2024-07-15 14:04:57.717131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.370 [2024-07-15 14:04:57.717149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.370 [2024-07-15 14:04:57.717161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.371 [2024-07-15 14:04:57.717177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.371 [2024-07-15 14:04:57.717372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 432.469 ms, result 0 00:23:34.305 00:23:34.305 00:23:34.305 14:04:58 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=81454 00:23:34.305 14:04:58 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:34.305 14:04:58 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 81454 00:23:34.305 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81454 ']' 00:23:34.305 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.305 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.305 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.305 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.305 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:34.562 [2024-07-15 14:04:58.912004] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:34.562 [2024-07-15 14:04:58.912159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81454 ] 00:23:34.562 [2024-07-15 14:04:59.075473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.821 [2024-07-15 14:04:59.261243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.755 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.755 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:23:35.755 14:04:59 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:35.755 [2024-07-15 14:05:00.243223] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:35.755 [2024-07-15 14:05:00.243324] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.015 [2024-07-15 14:05:00.405012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.405088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.015 [2024-07-15 14:05:00.405109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.015 [2024-07-15 14:05:00.405124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.408341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.408390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.015 [2024-07-15 14:05:00.408407] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms 00:23:36.015 [2024-07-15 14:05:00.408422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.408718] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.015 [2024-07-15 14:05:00.409749] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.015 [2024-07-15 14:05:00.409791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.409809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.015 [2024-07-15 14:05:00.409822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:23:36.015 [2024-07-15 14:05:00.409836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.411075] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:36.015 [2024-07-15 14:05:00.427193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.427239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:36.015 [2024-07-15 14:05:00.427261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.115 ms 00:23:36.015 [2024-07-15 14:05:00.427274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.427409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.427432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:36.015 [2024-07-15 14:05:00.427448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:36.015 [2024-07-15 14:05:00.427460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.431948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.432000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.015 [2024-07-15 14:05:00.432026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.403 ms 00:23:36.015 [2024-07-15 14:05:00.432038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.432187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.432209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.015 [2024-07-15 14:05:00.432224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:36.015 [2024-07-15 14:05:00.432236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.432328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.432352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.015 [2024-07-15 14:05:00.432367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:36.015 [2024-07-15 14:05:00.432379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.432429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:36.015 [2024-07-15 14:05:00.436720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:36.015 [2024-07-15 14:05:00.436762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.015 [2024-07-15 14:05:00.436778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:23:36.015 [2024-07-15 14:05:00.436791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.436861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.436885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.015 [2024-07-15 14:05:00.436899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:36.015 [2024-07-15 14:05:00.436915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.436945] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:36.015 [2024-07-15 14:05:00.436972] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:36.015 [2024-07-15 14:05:00.437032] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:36.015 [2024-07-15 14:05:00.437076] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:36.015 [2024-07-15 14:05:00.437199] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.015 [2024-07-15 14:05:00.437226] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.015 [2024-07-15 14:05:00.437250] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:36.015 [2024-07-15 14:05:00.437268] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.015 [2024-07-15 14:05:00.437283] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.015 [2024-07-15 14:05:00.437326] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:36.015 [2024-07-15 14:05:00.437343] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.015 [2024-07-15 14:05:00.437356] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.015 [2024-07-15 14:05:00.437368] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.015 [2024-07-15 14:05:00.437386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.437398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.015 [2024-07-15 14:05:00.437418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:23:36.015 [2024-07-15 14:05:00.437434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.437590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.437611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.015 [2024-07-15 14:05:00.437626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:36.015 [2024-07-15 14:05:00.437643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.437783] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.015 [2024-07-15 14:05:00.437823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.015 [2024-07-15 14:05:00.437851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.015 [2024-07-15 14:05:00.437871] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.015 [2024-07-15 14:05:00.437887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.015 [2024-07-15 14:05:00.437898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.015 [2024-07-15 14:05:00.437913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:36.015 [2024-07-15 14:05:00.437925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.015 [2024-07-15 14:05:00.437959] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:36.015 [2024-07-15 14:05:00.437975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.015 [2024-07-15 14:05:00.437999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.015 [2024-07-15 14:05:00.438021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:36.015 [2024-07-15 14:05:00.438040] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.015 [2024-07-15 14:05:00.438053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.015 [2024-07-15 14:05:00.438066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:36.015 [2024-07-15 14:05:00.438076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.015 [2024-07-15 14:05:00.438089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.015 [2024-07-15 14:05:00.438099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:36.015 [2024-07-15 14:05:00.438117] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.015 [2024-07-15 14:05:00.438137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.015 [2024-07-15 14:05:00.438152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:36.015 [2024-07-15 14:05:00.438164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.015 [2024-07-15 14:05:00.438176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.015 [2024-07-15 14:05:00.438187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:36.015 [2024-07-15 14:05:00.438201] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.015 [2024-07-15 14:05:00.438217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.015 [2024-07-15 14:05:00.438231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:36.015 [2024-07-15 14:05:00.438263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.015 [2024-07-15 14:05:00.438282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.015 [2024-07-15 14:05:00.438293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:36.015 [2024-07-15 14:05:00.438351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.015 [2024-07-15 14:05:00.438373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.015 [2024-07-15 
14:05:00.438388] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:36.015 [2024-07-15 14:05:00.438399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.015 [2024-07-15 14:05:00.438411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.015 [2024-07-15 14:05:00.438423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:36.015 [2024-07-15 14:05:00.438435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.015 [2024-07-15 14:05:00.438452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.015 [2024-07-15 14:05:00.438470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:36.015 [2024-07-15 14:05:00.438482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.015 [2024-07-15 14:05:00.438499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.015 [2024-07-15 14:05:00.438513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:36.015 [2024-07-15 14:05:00.438529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.015 [2024-07-15 14:05:00.438540] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.015 [2024-07-15 14:05:00.438558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.015 [2024-07-15 14:05:00.438577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.015 [2024-07-15 14:05:00.438601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.015 [2024-07-15 14:05:00.438619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:36.015 [2024-07-15 14:05:00.438633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.015 [2024-07-15 14:05:00.438650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.015 [2024-07-15 14:05:00.438672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.015 [2024-07-15 14:05:00.438691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.015 [2024-07-15 14:05:00.438706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.015 [2024-07-15 14:05:00.438720] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.015 [2024-07-15 14:05:00.438745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.015 [2024-07-15 14:05:00.438760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:36.015 [2024-07-15 14:05:00.438778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:36.015 [2024-07-15 14:05:00.438795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:36.015 [2024-07-15 14:05:00.438809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:36.015 [2024-07-15 14:05:00.438821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:36.015 
[2024-07-15 14:05:00.438834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:36.015 [2024-07-15 14:05:00.438846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:36.015 [2024-07-15 14:05:00.438862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:36.015 [2024-07-15 14:05:00.438882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:36.015 [2024-07-15 14:05:00.438906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:36.015 [2024-07-15 14:05:00.438925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:36.015 [2024-07-15 14:05:00.438945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:36.015 [2024-07-15 14:05:00.438957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:36.015 [2024-07-15 14:05:00.438971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:36.015 [2024-07-15 14:05:00.438983] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.015 [2024-07-15 14:05:00.438998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.015 [2024-07-15 14:05:00.439017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.015 [2024-07-15 14:05:00.439035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.015 [2024-07-15 14:05:00.439047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.015 [2024-07-15 14:05:00.439069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.015 [2024-07-15 14:05:00.439094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.439116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.015 [2024-07-15 14:05:00.439129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:23:36.015 [2024-07-15 14:05:00.439143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.471711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.471780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.015 [2024-07-15 14:05:00.471801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.475 ms 00:23:36.015 [2024-07-15 14:05:00.471819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.472010] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.472034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:36.015 [2024-07-15 14:05:00.472047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:36.015 [2024-07-15 14:05:00.472062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.510284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.510386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.015 [2024-07-15 14:05:00.510408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.179 ms 00:23:36.015 [2024-07-15 14:05:00.510423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.510552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.510574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.015 [2024-07-15 14:05:00.510588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.015 [2024-07-15 14:05:00.510602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.510955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.510997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.015 [2024-07-15 14:05:00.511018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:23:36.015 [2024-07-15 14:05:00.511032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.511218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.511253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.015 [2024-07-15 14:05:00.511274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:23:36.015 [2024-07-15 14:05:00.511292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.528823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.528882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.015 [2024-07-15 14:05:00.528901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.477 ms 00:23:36.015 [2024-07-15 14:05:00.528916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.015 [2024-07-15 14:05:00.545227] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:36.015 [2024-07-15 14:05:00.545277] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:36.015 [2024-07-15 14:05:00.545297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.015 [2024-07-15 14:05:00.545325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:36.015 [2024-07-15 14:05:00.545341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.222 ms 00:23:36.015 [2024-07-15 14:05:00.545355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.273 [2024-07-15 14:05:00.575101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.273 [2024-07-15 
14:05:00.575176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:36.273 [2024-07-15 14:05:00.575197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.644 ms 00:23:36.273 [2024-07-15 14:05:00.575212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.273 [2024-07-15 14:05:00.590995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.273 [2024-07-15 14:05:00.591044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:36.273 [2024-07-15 14:05:00.591073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.650 ms 00:23:36.273 [2024-07-15 14:05:00.591091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.274 [2024-07-15 14:05:00.606545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.274 [2024-07-15 14:05:00.606593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:36.274 [2024-07-15 14:05:00.606610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.360 ms 00:23:36.274 [2024-07-15 14:05:00.606624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.274 [2024-07-15 14:05:00.607484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.274 [2024-07-15 14:05:00.607524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:36.274 [2024-07-15 14:05:00.607540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:23:36.274 [2024-07-15 14:05:00.607554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.274 [2024-07-15 14:05:00.692011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.274 [2024-07-15 14:05:00.692089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:36.274 [2024-07-15 14:05:00.692111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.415 ms 00:23:36.274 [2024-07-15 14:05:00.692125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.274 [2024-07-15 14:05:00.704911] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:36.274 [2024-07-15 14:05:00.718905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.274 [2024-07-15 14:05:00.718974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:36.274 [2024-07-15 14:05:00.718999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.613 ms 00:23:36.274 [2024-07-15 14:05:00.719016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.274 [2024-07-15 14:05:00.719155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.274 [2024-07-15 14:05:00.719176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:36.274 [2024-07-15 14:05:00.719192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:36.274 [2024-07-15 14:05:00.719204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.274 [2024-07-15 14:05:00.719296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.274 [2024-07-15 14:05:00.719341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:36.274 [2024-07-15 14:05:00.719361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:36.274 
[2024-07-15 14:05:00.719381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.274 [2024-07-15 14:05:00.719428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.274 [2024-07-15 14:05:00.719443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:36.274 [2024-07-15 14:05:00.719465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:36.274 [2024-07-15 14:05:00.719486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.274 [2024-07-15 14:05:00.719547] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:36.274 [2024-07-15 14:05:00.719566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.274 [2024-07-15 14:05:00.719586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:36.274 [2024-07-15 14:05:00.719608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:36.274 [2024-07-15 14:05:00.719625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.274 [2024-07-15 14:05:00.750814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.274 [2024-07-15 14:05:00.750878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:36.274 [2024-07-15 14:05:00.750898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.148 ms 00:23:36.274 [2024-07-15 14:05:00.750913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.274 [2024-07-15 14:05:00.751057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.274 [2024-07-15 14:05:00.751102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:36.274 [2024-07-15 14:05:00.751123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:36.274 [2024-07-15 14:05:00.751138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.274 [2024-07-15 14:05:00.752245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.274 [2024-07-15 14:05:00.756442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 346.894 ms, result 0 00:23:36.274 [2024-07-15 14:05:00.757689] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:36.274 Some configs were skipped because the RPC state that can call them passed over. 
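The startup trace above logs every FTL management step as an Action / name / duration / status quadruple, and the final finish_msg record gives the total ('FTL startup', 346.894 ms). A minimal sketch for summarizing such a run — illustrative only, not part of the SPDK tree or this test — pairs each "name:" record with the "duration:" record that follows it, assuming the console output has been saved one record per line to a file passed as argv[1] (a hypothetical filename):

import re
import sys

# Illustrative parser for the trace_step records shown above. The record
# format ("... trace_step: *NOTICE*: [FTL][ftl0] name: <step>") is taken
# from this log; the file path is an assumption.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

steps = []
pending = None
with open(sys.argv[1]) as log:
    for line in log:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(1)
            continue
        m = DUR_RE.search(line)
        if m and pending is not None:
            steps.append((float(m.group(1)), pending))
            pending = None

# Slowest steps first; in the run above, "Restore P2L checkpoints"
# (84.415 ms) and "Initialize NV cache" (38.179 ms) dominate the
# 346.894 ms total startup time.
for duration, name in sorted(steps, reverse=True)[:10]:
    print(f"{duration:9.3f} ms  {name}")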
00:23:36.274 14:05:00 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:36.535 [2024-07-15 14:05:01.015685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.535 [2024-07-15 14:05:01.015754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:36.535 [2024-07-15 14:05:01.015781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:23:36.535 [2024-07-15 14:05:01.015794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.535 [2024-07-15 14:05:01.015845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.592 ms, result 0 00:23:36.535 true 00:23:36.535 14:05:01 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:36.792 [2024-07-15 14:05:01.303621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.792 [2024-07-15 14:05:01.303687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:36.792 [2024-07-15 14:05:01.303708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:23:36.792 [2024-07-15 14:05:01.303723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.792 [2024-07-15 14:05:01.303772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.152 ms, result 0 00:23:36.792 true 00:23:36.792 14:05:01 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 81454 00:23:36.792 14:05:01 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81454 ']' 00:23:36.792 14:05:01 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81454 00:23:36.792 14:05:01 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:23:36.792 14:05:01 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.792 14:05:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81454 00:23:37.052 14:05:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:37.052 14:05:01 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:37.052 killing process with pid 81454 00:23:37.052 14:05:01 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81454' 00:23:37.052 14:05:01 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81454 00:23:37.052 14:05:01 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81454 00:23:37.988 [2024-07-15 14:05:02.297739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.297815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:37.988 [2024-07-15 14:05:02.297838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:37.988 [2024-07-15 14:05:02.297850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.297886] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:37.988 [2024-07-15 14:05:02.301181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.301222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:37.988 [2024-07-15 14:05:02.301237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.273 ms 00:23:37.988 [2024-07-15 14:05:02.301253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.301577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.301611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:37.988 [2024-07-15 14:05:02.301625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:23:37.988 [2024-07-15 14:05:02.301639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.305705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.305753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:37.988 [2024-07-15 14:05:02.305772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.041 ms 00:23:37.988 [2024-07-15 14:05:02.305786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.313356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.313398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:37.988 [2024-07-15 14:05:02.313413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.525 ms 00:23:37.988 [2024-07-15 14:05:02.313429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.325948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.325995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:37.988 [2024-07-15 14:05:02.326012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.459 ms 00:23:37.988 [2024-07-15 14:05:02.326028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.334441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.334489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:37.988 [2024-07-15 14:05:02.334510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.366 ms 00:23:37.988 [2024-07-15 14:05:02.334535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.334694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.334718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:37.988 [2024-07-15 14:05:02.334731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:23:37.988 [2024-07-15 14:05:02.334758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.348095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.348145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:37.988 [2024-07-15 14:05:02.348162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.310 ms 00:23:37.988 [2024-07-15 14:05:02.348177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.360783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.360836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:37.988 [2024-07-15 
14:05:02.360855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.546 ms 00:23:37.988 [2024-07-15 14:05:02.360874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.373150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.373203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:37.988 [2024-07-15 14:05:02.373220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.227 ms 00:23:37.988 [2024-07-15 14:05:02.373234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.385503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.988 [2024-07-15 14:05:02.385550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:37.988 [2024-07-15 14:05:02.385567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.180 ms 00:23:37.988 [2024-07-15 14:05:02.385581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.988 [2024-07-15 14:05:02.385625] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:37.988 [2024-07-15 14:05:02.385653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385873] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.385991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:37.988 [2024-07-15 14:05:02.386177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 
14:05:02.386203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:23:37.989 [2024-07-15 14:05:02.386566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.386990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.387003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:37.989 [2024-07-15 14:05:02.387026] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:37.989 [2024-07-15 14:05:02.387038] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 96438210-a730-46dd-94de-bf7d6eb48d99 00:23:37.989 [2024-07-15 14:05:02.387058] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:37.989 [2024-07-15 14:05:02.387070] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:37.989 [2024-07-15 14:05:02.387083] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:37.989 [2024-07-15 14:05:02.387095] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:37.989 [2024-07-15 14:05:02.387108] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:37.989 [2024-07-15 14:05:02.387119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:37.989 [2024-07-15 14:05:02.387133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:37.989 [2024-07-15 14:05:02.387143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:37.989 [2024-07-15 14:05:02.387171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:37.989 [2024-07-15 14:05:02.387183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.989 [2024-07-15 14:05:02.387197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:37.989 [2024-07-15 14:05:02.387211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.561 ms 00:23:37.989 [2024-07-15 14:05:02.387224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.989 [2024-07-15 14:05:02.403715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.989 [2024-07-15 14:05:02.403764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:37.989 [2024-07-15 14:05:02.403782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.448 ms 00:23:37.989 [2024-07-15 14:05:02.403799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.989 [2024-07-15 14:05:02.404267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:37.989 [2024-07-15 14:05:02.404299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:37.989 [2024-07-15 14:05:02.404338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:23:37.989 [2024-07-15 14:05:02.404355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.989 [2024-07-15 14:05:02.458929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.989 [2024-07-15 14:05:02.459000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:37.989 [2024-07-15 14:05:02.459018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.989 [2024-07-15 14:05:02.459033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.989 [2024-07-15 14:05:02.459169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.989 [2024-07-15 14:05:02.459190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:37.989 [2024-07-15 14:05:02.459203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.989 [2024-07-15 14:05:02.459221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.989 [2024-07-15 14:05:02.459292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.989 [2024-07-15 14:05:02.459335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:37.989 [2024-07-15 14:05:02.459350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.989 [2024-07-15 14:05:02.459366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.989 [2024-07-15 14:05:02.459392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.989 [2024-07-15 14:05:02.459409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:37.989 [2024-07-15 14:05:02.459421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.989 [2024-07-15 14:05:02.459434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.264 [2024-07-15 14:05:02.558606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.264 [2024-07-15 14:05:02.558684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.264 [2024-07-15 14:05:02.558703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.264 [2024-07-15 14:05:02.558718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.264 [2024-07-15 14:05:02.642568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.264 [2024-07-15 14:05:02.642648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.264 [2024-07-15 14:05:02.642676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.264 [2024-07-15 14:05:02.642693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.264 [2024-07-15 14:05:02.642802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.264 [2024-07-15 14:05:02.642825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.264 [2024-07-15 14:05:02.642838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.264 [2024-07-15 14:05:02.642854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:38.264 [2024-07-15 14:05:02.642891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.264 [2024-07-15 14:05:02.642907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.264 [2024-07-15 14:05:02.642919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.264 [2024-07-15 14:05:02.642932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.264 [2024-07-15 14:05:02.643060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.264 [2024-07-15 14:05:02.643081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.264 [2024-07-15 14:05:02.643094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.264 [2024-07-15 14:05:02.643107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.264 [2024-07-15 14:05:02.643163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.264 [2024-07-15 14:05:02.643185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:38.264 [2024-07-15 14:05:02.643198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.264 [2024-07-15 14:05:02.643211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.264 [2024-07-15 14:05:02.643259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.264 [2024-07-15 14:05:02.643281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.264 [2024-07-15 14:05:02.643293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.264 [2024-07-15 14:05:02.643334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.264 [2024-07-15 14:05:02.643395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.264 [2024-07-15 14:05:02.643415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.264 [2024-07-15 14:05:02.643429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.264 [2024-07-15 14:05:02.643442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.264 [2024-07-15 14:05:02.643601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.845 ms, result 0 00:23:39.195 14:05:03 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:39.195 [2024-07-15 14:05:03.655522] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
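The spdk_dd invocation at the end of the shutdown above re-attaches to ftl0 via the JSON config the test persisted and copies 65536 blocks from the bdev into a regular file for verification. For reference, a minimal Python equivalent of that shell command — a sketch assuming this job's workspace layout, with all paths and flags copied from the log line above, not a helper that exists in the repo:

import subprocess

# spdk_dd reads --count blocks from the input bdev (--ib=ftl0) into an
# output file (--of), using the SPDK JSON config to recreate the FTL
# device. Paths are the ones this job uses; adjust for a local checkout.
SPDK = "/home/vagrant/spdk_repo/spdk"
cmd = [
    f"{SPDK}/build/bin/spdk_dd",
    "--ib=ftl0",
    f"--of={SPDK}/test/ftl/data",
    "--count=65536",
    f"--json={SPDK}/test/ftl/config/ftl.json",
]
subprocess.run(cmd, check=True)  # raise if spdk_dd exits non-zero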
00:23:39.195 [2024-07-15 14:05:03.655674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81519 ] 00:23:39.453 [2024-07-15 14:05:03.817209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.711 [2024-07-15 14:05:04.001016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.970 [2024-07-15 14:05:04.306757] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:39.970 [2024-07-15 14:05:04.306835] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:39.970 [2024-07-15 14:05:04.467908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.970 [2024-07-15 14:05:04.467973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:39.970 [2024-07-15 14:05:04.467994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:39.970 [2024-07-15 14:05:04.468006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.970 [2024-07-15 14:05:04.471196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.970 [2024-07-15 14:05:04.471241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:39.970 [2024-07-15 14:05:04.471258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.160 ms 00:23:39.970 [2024-07-15 14:05:04.471270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.970 [2024-07-15 14:05:04.471522] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:39.970 [2024-07-15 14:05:04.472484] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:39.970 [2024-07-15 14:05:04.472521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.970 [2024-07-15 14:05:04.472535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:39.970 [2024-07-15 14:05:04.472547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:23:39.970 [2024-07-15 14:05:04.472559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.970 [2024-07-15 14:05:04.473838] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:39.970 [2024-07-15 14:05:04.491846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.970 [2024-07-15 14:05:04.491935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:39.970 [2024-07-15 14:05:04.491976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.003 ms 00:23:39.970 [2024-07-15 14:05:04.491998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.970 [2024-07-15 14:05:04.492203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.970 [2024-07-15 14:05:04.492236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:39.970 [2024-07-15 14:05:04.492260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:39.970 [2024-07-15 14:05:04.492280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.970 [2024-07-15 14:05:04.497383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:39.970 [2024-07-15 14:05:04.497435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:39.970 [2024-07-15 14:05:04.497453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.957 ms 00:23:39.970 [2024-07-15 14:05:04.497465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.970 [2024-07-15 14:05:04.497620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.970 [2024-07-15 14:05:04.497641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:39.971 [2024-07-15 14:05:04.497655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:39.971 [2024-07-15 14:05:04.497665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.971 [2024-07-15 14:05:04.497709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.971 [2024-07-15 14:05:04.497725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:39.971 [2024-07-15 14:05:04.497741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:39.971 [2024-07-15 14:05:04.497753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.971 [2024-07-15 14:05:04.497785] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:39.971 [2024-07-15 14:05:04.502088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.971 [2024-07-15 14:05:04.502128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:39.971 [2024-07-15 14:05:04.502143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.313 ms 00:23:39.971 [2024-07-15 14:05:04.502154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.971 [2024-07-15 14:05:04.502252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.971 [2024-07-15 14:05:04.502271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:39.971 [2024-07-15 14:05:04.502284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:39.971 [2024-07-15 14:05:04.502295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.971 [2024-07-15 14:05:04.502371] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:39.971 [2024-07-15 14:05:04.502415] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:39.971 [2024-07-15 14:05:04.502464] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:39.971 [2024-07-15 14:05:04.502485] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:39.971 [2024-07-15 14:05:04.502590] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:39.971 [2024-07-15 14:05:04.502611] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:39.971 [2024-07-15 14:05:04.502626] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:39.971 [2024-07-15 14:05:04.502642] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:39.971 [2024-07-15 14:05:04.502655] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:39.971 [2024-07-15 14:05:04.502672] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:39.971 [2024-07-15 14:05:04.502683] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:39.971 [2024-07-15 14:05:04.502694] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:39.971 [2024-07-15 14:05:04.502704] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:39.971 [2024-07-15 14:05:04.502716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.971 [2024-07-15 14:05:04.502727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:39.971 [2024-07-15 14:05:04.502739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:23:39.971 [2024-07-15 14:05:04.502750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.971 [2024-07-15 14:05:04.502848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.971 [2024-07-15 14:05:04.502867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:39.971 [2024-07-15 14:05:04.502884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:39.971 [2024-07-15 14:05:04.502895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.971 [2024-07-15 14:05:04.503003] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:39.971 [2024-07-15 14:05:04.503024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:39.971 [2024-07-15 14:05:04.503037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:39.971 [2024-07-15 14:05:04.503048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:39.971 [2024-07-15 14:05:04.503073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503084] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:39.971 [2024-07-15 14:05:04.503094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:39.971 [2024-07-15 14:05:04.503105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:39.971 [2024-07-15 14:05:04.503125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:39.971 [2024-07-15 14:05:04.503135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:39.971 [2024-07-15 14:05:04.503145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:39.971 [2024-07-15 14:05:04.503155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:39.971 [2024-07-15 14:05:04.503166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:39.971 [2024-07-15 14:05:04.503175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:39.971 [2024-07-15 14:05:04.503196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:39.971 [2024-07-15 14:05:04.503220] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:39.971 [2024-07-15 14:05:04.503241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503251] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.971 [2024-07-15 14:05:04.503261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:39.971 [2024-07-15 14:05:04.503271] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503281] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.971 [2024-07-15 14:05:04.503291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:39.971 [2024-07-15 14:05:04.503316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.971 [2024-07-15 14:05:04.503339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:39.971 [2024-07-15 14:05:04.503350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.971 [2024-07-15 14:05:04.503369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:39.971 [2024-07-15 14:05:04.503379] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503389] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:39.971 [2024-07-15 14:05:04.503400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:39.971 [2024-07-15 14:05:04.503410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:39.971 [2024-07-15 14:05:04.503421] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:39.971 [2024-07-15 14:05:04.503431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:39.971 [2024-07-15 14:05:04.503442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:39.971 [2024-07-15 14:05:04.503452] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:39.971 [2024-07-15 14:05:04.503472] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:39.971 [2024-07-15 14:05:04.503482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503492] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:39.971 [2024-07-15 14:05:04.503503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:39.971 [2024-07-15 14:05:04.503514] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:39.971 [2024-07-15 14:05:04.503526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.971 [2024-07-15 14:05:04.503542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:39.971 [2024-07-15 14:05:04.503553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:39.971 [2024-07-15 14:05:04.503563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:39.971 
[2024-07-15 14:05:04.503574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:39.971 [2024-07-15 14:05:04.503583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:39.971 [2024-07-15 14:05:04.503594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:39.971 [2024-07-15 14:05:04.503605] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:39.971 [2024-07-15 14:05:04.503619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:39.971 [2024-07-15 14:05:04.503631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:39.971 [2024-07-15 14:05:04.503643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:39.971 [2024-07-15 14:05:04.503654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:39.971 [2024-07-15 14:05:04.503665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:39.971 [2024-07-15 14:05:04.503676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:39.971 [2024-07-15 14:05:04.503687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:39.971 [2024-07-15 14:05:04.503698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:39.971 [2024-07-15 14:05:04.503709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:39.971 [2024-07-15 14:05:04.503720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:39.971 [2024-07-15 14:05:04.503732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:39.971 [2024-07-15 14:05:04.503743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:39.971 [2024-07-15 14:05:04.503754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:39.971 [2024-07-15 14:05:04.503765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:39.971 [2024-07-15 14:05:04.503777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:39.971 [2024-07-15 14:05:04.503788] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:39.971 [2024-07-15 14:05:04.503800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:39.971 [2024-07-15 14:05:04.503812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:39.972 [2024-07-15 14:05:04.503824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:39.972 [2024-07-15 14:05:04.503835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:39.972 [2024-07-15 14:05:04.503846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:39.972 [2024-07-15 14:05:04.503858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.972 [2024-07-15 14:05:04.503870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:39.972 [2024-07-15 14:05:04.503881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:23:39.972 [2024-07-15 14:05:04.503892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.230 [2024-07-15 14:05:04.542181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.230 [2024-07-15 14:05:04.542241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.230 [2024-07-15 14:05:04.542261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.217 ms 00:23:40.230 [2024-07-15 14:05:04.542278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.230 [2024-07-15 14:05:04.542515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.230 [2024-07-15 14:05:04.542549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.230 [2024-07-15 14:05:04.542577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:40.230 [2024-07-15 14:05:04.542588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.230 [2024-07-15 14:05:04.580778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.230 [2024-07-15 14:05:04.580844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.230 [2024-07-15 14:05:04.580873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.154 ms 00:23:40.230 [2024-07-15 14:05:04.580894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.230 [2024-07-15 14:05:04.581057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.230 [2024-07-15 14:05:04.581079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.230 [2024-07-15 14:05:04.581093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:40.230 [2024-07-15 14:05:04.581104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.230 [2024-07-15 14:05:04.581448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.230 [2024-07-15 14:05:04.581490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.230 [2024-07-15 14:05:04.581513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:23:40.230 [2024-07-15 14:05:04.581533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.230 [2024-07-15 14:05:04.581700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.230 [2024-07-15 14:05:04.581720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.230 [2024-07-15 14:05:04.581732] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:23:40.230 [2024-07-15 14:05:04.581743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.230 [2024-07-15 14:05:04.597930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.230 [2024-07-15 14:05:04.597986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.230 [2024-07-15 14:05:04.598004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.157 ms 00:23:40.231 [2024-07-15 14:05:04.598016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.231 [2024-07-15 14:05:04.614367] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:40.231 [2024-07-15 14:05:04.614430] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:40.231 [2024-07-15 14:05:04.614450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.231 [2024-07-15 14:05:04.614462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:40.231 [2024-07-15 14:05:04.614476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.262 ms 00:23:40.231 [2024-07-15 14:05:04.614488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.231 [2024-07-15 14:05:04.644380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.231 [2024-07-15 14:05:04.644435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:40.231 [2024-07-15 14:05:04.644454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.777 ms 00:23:40.231 [2024-07-15 14:05:04.644466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.231 [2024-07-15 14:05:04.660350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.231 [2024-07-15 14:05:04.660397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:40.231 [2024-07-15 14:05:04.660414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.762 ms 00:23:40.231 [2024-07-15 14:05:04.660426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.231 [2024-07-15 14:05:04.676347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.231 [2024-07-15 14:05:04.676394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:40.231 [2024-07-15 14:05:04.676412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.817 ms 00:23:40.231 [2024-07-15 14:05:04.676423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.231 [2024-07-15 14:05:04.677228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.231 [2024-07-15 14:05:04.677260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.231 [2024-07-15 14:05:04.677274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:23:40.231 [2024-07-15 14:05:04.677285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.231 [2024-07-15 14:05:04.749802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.231 [2024-07-15 14:05:04.749868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:40.231 [2024-07-15 14:05:04.749888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.470 ms 00:23:40.231 [2024-07-15 14:05:04.749900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.231 [2024-07-15 14:05:04.762708] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:40.231 [2024-07-15 14:05:04.776633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.231 [2024-07-15 14:05:04.776702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.231 [2024-07-15 14:05:04.776721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.581 ms 00:23:40.231 [2024-07-15 14:05:04.776734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-07-15 14:05:04.776876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-07-15 14:05:04.776899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:40.489 [2024-07-15 14:05:04.776913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:40.489 [2024-07-15 14:05:04.776925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-07-15 14:05:04.776991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-07-15 14:05:04.777007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.489 [2024-07-15 14:05:04.777019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:40.489 [2024-07-15 14:05:04.777031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-07-15 14:05:04.777063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-07-15 14:05:04.777077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:40.489 [2024-07-15 14:05:04.777094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:40.489 [2024-07-15 14:05:04.777105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-07-15 14:05:04.777141] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:40.490 [2024-07-15 14:05:04.777157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.490 [2024-07-15 14:05:04.777168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:40.490 [2024-07-15 14:05:04.777180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:40.490 [2024-07-15 14:05:04.777190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.490 [2024-07-15 14:05:04.808202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.490 [2024-07-15 14:05:04.808270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:40.490 [2024-07-15 14:05:04.808289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.981 ms 00:23:40.490 [2024-07-15 14:05:04.808315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.490 [2024-07-15 14:05:04.808461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.490 [2024-07-15 14:05:04.808482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:40.490 [2024-07-15 14:05:04.808495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:40.490 [2024-07-15 14:05:04.808506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
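The records above come from FTL's management pipeline in mngt/ftl_mngt.c: each startup step is traced as an Action with its name, duration, and status, and status 0 means the step (e.g. 'Set FTL dirty state', 'Finalize initialization') completed before the pipeline advanced. A minimal sketch of driving this same startup trace by hand against a running spdk_tgt follows; the rpc.py path matches this repo layout, while the bdev names and the bdev_ftl_create/bdev_ftl_delete invocations are illustrative assumptions rather than commands taken from this run:

#!/usr/bin/env bash
# Sketch only — assumes a running spdk_tgt; bdev names are illustrative.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device and NV cache device on two PCIe controllers
$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0

# Creating the FTL bdev kicks off the 'FTL startup' management pipeline
# traced above (superblock, layout, L2P, NV cache, P2L restore, ...)
$RPC bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1

# Deleting it runs the mirrored 'FTL shutdown' pipeline (persist metadata,
# set clean state, dump statistics), as seen later in this log
$RPC bdev_ftl_delete -b ftl0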
00:23:40.490 [2024-07-15 14:05:04.809513] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:40.490 [2024-07-15 14:05:04.813675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 341.257 ms, result 0 00:23:40.490 [2024-07-15 14:05:04.814518] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:40.490 [2024-07-15 14:05:04.830924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:50.708  Copying: 29/256 [MB] (29 MBps) Copying: 53/256 [MB] (24 MBps) Copying: 78/256 [MB] (24 MBps) Copying: 102/256 [MB] (24 MBps) Copying: 126/256 [MB] (23 MBps) Copying: 151/256 [MB] (25 MBps) Copying: 177/256 [MB] (25 MBps) Copying: 201/256 [MB] (24 MBps) Copying: 226/256 [MB] (25 MBps) Copying: 251/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-15 14:05:15.247793] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:50.966 [2024-07-15 14:05:15.267098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.267173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:50.966 [2024-07-15 14:05:15.267198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:50.966 [2024-07-15 14:05:15.267212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 14:05:15.267254] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:50.966 [2024-07-15 14:05:15.271290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.271352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:50.966 [2024-07-15 14:05:15.271370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.010 ms 00:23:50.966 [2024-07-15 14:05:15.271383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 14:05:15.271731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.271760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:50.966 [2024-07-15 14:05:15.271777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:23:50.966 [2024-07-15 14:05:15.271790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 14:05:15.276814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.276852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:50.966 [2024-07-15 14:05:15.276876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.997 ms 00:23:50.966 [2024-07-15 14:05:15.276891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 14:05:15.286178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.286222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:50.966 [2024-07-15 14:05:15.286239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.256 ms 00:23:50.966 [2024-07-15 14:05:15.286258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 
14:05:15.323955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.324012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:50.966 [2024-07-15 14:05:15.324032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.575 ms 00:23:50.966 [2024-07-15 14:05:15.324046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 14:05:15.344878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.344952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:50.966 [2024-07-15 14:05:15.344974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.748 ms 00:23:50.966 [2024-07-15 14:05:15.344998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 14:05:15.345207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.345230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:50.966 [2024-07-15 14:05:15.345246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:23:50.966 [2024-07-15 14:05:15.345259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 14:05:15.383575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.383643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:50.966 [2024-07-15 14:05:15.383665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.286 ms 00:23:50.966 [2024-07-15 14:05:15.383678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 14:05:15.421265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.421332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:50.966 [2024-07-15 14:05:15.421354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.496 ms 00:23:50.966 [2024-07-15 14:05:15.421368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 14:05:15.458631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.458685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:50.966 [2024-07-15 14:05:15.458706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.178 ms 00:23:50.966 [2024-07-15 14:05:15.458719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 14:05:15.495892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.966 [2024-07-15 14:05:15.495944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:50.966 [2024-07-15 14:05:15.495964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.053 ms 00:23:50.966 [2024-07-15 14:05:15.495977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.966 [2024-07-15 14:05:15.496057] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:50.966 [2024-07-15 14:05:15.496096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:23:50.966 [2024-07-15 14:05:15.496127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:23:50.966 [2024-07-15 14:05:15.496498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:50.966 [2024-07-15 14:05:15.496597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.496999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497178] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:50.967 [2024-07-15 14:05:15.497538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:50.967 [2024-07-15 14:05:15.497552] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 96438210-a730-46dd-94de-bf7d6eb48d99 00:23:50.967 [2024-07-15 14:05:15.497566] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:50.967 [2024-07-15 14:05:15.497579] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:50.967 [2024-07-15 14:05:15.497608] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:50.967 [2024-07-15 14:05:15.497622] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:50.967 [2024-07-15 14:05:15.497634] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:50.967 [2024-07-15 14:05:15.497648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:50.967 [2024-07-15 14:05:15.497661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:50.967 [2024-07-15 14:05:15.497673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:50.967 [2024-07-15 14:05:15.497684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:50.967 [2024-07-15 14:05:15.497698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.967 [2024-07-15 14:05:15.497711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:50.967 [2024-07-15 14:05:15.497730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:23:50.967 [2024-07-15 14:05:15.497743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.517746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.226 [2024-07-15 14:05:15.517807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:51.226 [2024-07-15 14:05:15.517827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.971 ms 00:23:51.226 [2024-07-15 14:05:15.517841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.518425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.226 [2024-07-15 14:05:15.518468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:51.226 [2024-07-15 14:05:15.518486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:23:51.226 [2024-07-15 14:05:15.518500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.568364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.568427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.226 [2024-07-15 14:05:15.568448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.568463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.568610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.568648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.226 [2024-07-15 14:05:15.568664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.568677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.568753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.568774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:23:51.226 [2024-07-15 14:05:15.568788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.568801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.568830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.568846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.226 [2024-07-15 14:05:15.568866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.568879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.671721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.671792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.226 [2024-07-15 14:05:15.671812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.671823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.755424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.755511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.226 [2024-07-15 14:05:15.755537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.755548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.755635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.755651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.226 [2024-07-15 14:05:15.755663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.755674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.755708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.755721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:51.226 [2024-07-15 14:05:15.755733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.755749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.755872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.755892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.226 [2024-07-15 14:05:15.755904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.755915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.755968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.755999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:51.226 [2024-07-15 14:05:15.756013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.756023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.756088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.756103] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.226 [2024-07-15 14:05:15.756115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.756125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.756180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.226 [2024-07-15 14:05:15.756196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.226 [2024-07-15 14:05:15.756207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.226 [2024-07-15 14:05:15.756223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.226 [2024-07-15 14:05:15.756409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 489.336 ms, result 0 00:23:52.601 00:23:52.601 00:23:52.601 14:05:16 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:52.860 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:23:52.860 14:05:17 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:52.860 14:05:17 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:23:52.860 14:05:17 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:52.860 14:05:17 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:52.860 14:05:17 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:23:53.119 14:05:17 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:53.119 14:05:17 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 81454 00:23:53.119 14:05:17 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81454 ']' 00:23:53.119 14:05:17 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81454 00:23:53.119 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81454) - No such process 00:23:53.119 Process with pid 81454 is not found 00:23:53.119 14:05:17 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 81454 is not found' 00:23:53.119 00:23:53.119 real 1m7.501s 00:23:53.119 user 1m34.726s 00:23:53.119 sys 0m6.571s 00:23:53.119 14:05:17 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:53.119 ************************************ 00:23:53.119 END TEST ftl_trim 00:23:53.119 ************************************ 00:23:53.119 14:05:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:53.119 14:05:17 ftl -- common/autotest_common.sh@1142 -- # return 0 00:23:53.119 14:05:17 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:53.119 14:05:17 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:53.119 14:05:17 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:53.119 14:05:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:53.119 ************************************ 00:23:53.119 START TEST ftl_restore 00:23:53.119 ************************************ 00:23:53.119 14:05:17 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:53.119 * Looking for test storage... 
00:23:53.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.w5K4SXQXrJ 00:23:53.119 14:05:17 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt
00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in
00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2
00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240
00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=81715
00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 81715
00:23:53.119 14:05:17 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 81715 ']'
00:23:53.119 14:05:17 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:53.119 14:05:17 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:23:53.119 14:05:17 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:53.119 14:05:17 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:53.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:53.119 14:05:17 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:53.119 14:05:17 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:23:53.378 [2024-07-15 14:05:17.742616] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:23:53.378 [2024-07-15 14:05:17.742818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81715 ]
00:23:53.378 [2024-07-15 14:05:17.913400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:53.636 [2024-07-15 14:05:18.115318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:54.569 14:05:18 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:54.569 14:05:18 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0
00:23:54.569 14:05:18 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:23:54.569 14:05:18 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0
00:23:54.569 14:05:18 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:23:54.569 14:05:18 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424
00:23:54.569 14:05:18 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev
00:23:54.569 14:05:18 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:23:54.825 14:05:19 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:23:54.825 14:05:19 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size
00:23:54.825 14:05:19 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:23:54.825 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1
00:23:54.825 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info
00:23:54.825 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs
00:23:54.825 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb
00:23:54.825 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:23:55.083 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[
00:23:55.083 {
00:23:55.083 "name": "nvme0n1",
00:23:55.083 "aliases": [
00:23:55.083 "1adaec49-f7d0-473d-8fd5-11cb22c2d332"
00:23:55.083 ],
00:23:55.083 "product_name": "NVMe disk",
00:23:55.083 "block_size": 4096,
00:23:55.083 "num_blocks": 1310720,
00:23:55.083 "uuid": "1adaec49-f7d0-473d-8fd5-11cb22c2d332",
00:23:55.083 "assigned_rate_limits": {
00:23:55.083 "rw_ios_per_sec": 0,
00:23:55.083 "rw_mbytes_per_sec": 0,
00:23:55.083 "r_mbytes_per_sec": 0,
00:23:55.083 "w_mbytes_per_sec": 0
00:23:55.083 },
00:23:55.083 "claimed": true,
00:23:55.083 "claim_type": "read_many_write_one",
00:23:55.083 "zoned": false,
00:23:55.083 "supported_io_types": {
00:23:55.083 "read": true,
00:23:55.083 "write": true,
00:23:55.083 "unmap": true,
00:23:55.083 "flush": true,
00:23:55.083 "reset": true,
00:23:55.083 "nvme_admin": true,
00:23:55.083 "nvme_io": true,
00:23:55.083 "nvme_io_md": false,
00:23:55.083 "write_zeroes": true,
00:23:55.083 "zcopy": false,
00:23:55.083 "get_zone_info": false,
00:23:55.083 "zone_management": false,
00:23:55.083 "zone_append": false,
00:23:55.083 "compare": true,
00:23:55.083 "compare_and_write": false,
00:23:55.083 "abort": true,
00:23:55.083 "seek_hole": false,
00:23:55.083 "seek_data": false,
00:23:55.083 "copy": true,
00:23:55.083 "nvme_iov_md": false
00:23:55.084 },
00:23:55.084 "driver_specific": {
00:23:55.084 "nvme": [
00:23:55.084 {
00:23:55.084 "pci_address": "0000:00:11.0",
00:23:55.084 "trid": {
00:23:55.084 "trtype": "PCIe",
00:23:55.084 "traddr": "0000:00:11.0"
00:23:55.084 },
00:23:55.084 "ctrlr_data": {
00:23:55.084 "cntlid": 0,
00:23:55.084 "vendor_id": "0x1b36",
00:23:55.084 "model_number": "QEMU NVMe Ctrl",
00:23:55.084 "serial_number": "12341",
00:23:55.084 "firmware_revision": "8.0.0",
00:23:55.084 "subnqn": "nqn.2019-08.org.qemu:12341",
00:23:55.084 "oacs": {
00:23:55.084 "security": 0,
00:23:55.084 "format": 1,
00:23:55.084 "firmware": 0,
00:23:55.084 "ns_manage": 1
00:23:55.084 },
00:23:55.084 "multi_ctrlr": false,
00:23:55.084 "ana_reporting": false
00:23:55.084 },
00:23:55.084 "vs": {
00:23:55.084 "nvme_version": "1.4"
00:23:55.084 },
00:23:55.084 "ns_data": {
00:23:55.084 "id": 1,
00:23:55.084 "can_share": false
00:23:55.084 }
00:23:55.084 }
00:23:55.084 ],
00:23:55.084 "mp_policy": "active_passive"
00:23:55.084 }
00:23:55.084 }
00:23:55.084 ]'
00:23:55.084 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:23:55.084 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096
00:23:55.084 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:23:55.084 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720
00:23:55.084 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120
00:23:55.084 14:05:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120
00:23:55.084 14:05:19 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120
00:23:55.084 14:05:19 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:23:55.084 14:05:19 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols
00:23:55.084 14:05:19 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:23:55.084 14:05:19 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:23:55.341 14:05:19 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=71865862-dc60-489e-9157-e3790f17938f
00:23:55.341 14:05:19 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores
00:23:55.341 14:05:19 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 71865862-dc60-489e-9157-e3790f17938f
00:23:55.599 14:05:20 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:23:55.862 14:05:20 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=deecc4d3-243f-411f-857b-4ad8d16326d1
00:23:55.862 14:05:20 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u deecc4d3-243f-411f-857b-4ad8d16326d1
00:23:56.127 14:05:20 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:56.127 14:05:20 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']'
00:23:56.127 14:05:20 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:56.127 14:05:20 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0
00:23:56.127 14:05:20 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:23:56.127 14:05:20 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:56.127 14:05:20 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size=
00:23:56.127 14:05:20 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:56.127 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:56.127 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info
00:23:56.127 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs
00:23:56.127 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb
00:23:56.127 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:56.385 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[
00:23:56.385 {
00:23:56.385 "name": "0c94312c-9e34-4f2a-85e3-aa36d10e3c9b",
00:23:56.385 "aliases": [
00:23:56.385 "lvs/nvme0n1p0"
00:23:56.385 ],
00:23:56.385 "product_name": "Logical Volume",
00:23:56.385 "block_size": 4096,
00:23:56.385 "num_blocks": 26476544,
00:23:56.385 "uuid": "0c94312c-9e34-4f2a-85e3-aa36d10e3c9b",
00:23:56.385 "assigned_rate_limits": {
00:23:56.385 "rw_ios_per_sec": 0,
00:23:56.385 "rw_mbytes_per_sec": 0,
00:23:56.385 "r_mbytes_per_sec": 0,
00:23:56.385 "w_mbytes_per_sec": 0
00:23:56.385 },
00:23:56.385 "claimed": false,
00:23:56.385 "zoned": false,
00:23:56.385 "supported_io_types": {
00:23:56.385 "read": true,
00:23:56.385 "write": true,
00:23:56.385 "unmap": true,
00:23:56.385 "flush": false,
00:23:56.385 "reset": true,
00:23:56.385 "nvme_admin": false,
00:23:56.385 "nvme_io": false,
00:23:56.385 "nvme_io_md": false,
00:23:56.385 "write_zeroes": true,
00:23:56.385 "zcopy": false,
00:23:56.385 "get_zone_info": false,
00:23:56.385 "zone_management": false,
00:23:56.385 "zone_append": false,
00:23:56.385 "compare": false,
00:23:56.385 "compare_and_write": false,
00:23:56.385 "abort": false,
00:23:56.385 "seek_hole": true,
00:23:56.385 "seek_data": true,
00:23:56.385 "copy": false,
00:23:56.385 "nvme_iov_md": false
00:23:56.385 },
00:23:56.385 "driver_specific": {
00:23:56.385 "lvol": {
00:23:56.385 "lvol_store_uuid": "deecc4d3-243f-411f-857b-4ad8d16326d1",
00:23:56.385 "base_bdev": "nvme0n1",
00:23:56.385 "thin_provision": true,
00:23:56.385 "num_allocated_clusters": 0,
00:23:56.385 "snapshot": false,
00:23:56.385 "clone": false,
00:23:56.385 "esnap_clone": false
00:23:56.385 }
00:23:56.385 }
00:23:56.385 }
00:23:56.385 ]'
00:23:56.385 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:23:56.385 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096
00:23:56.385 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:23:56.385 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544
00:23:56.385 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:23:56.385 14:05:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424
00:23:56.385 14:05:20 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171
00:23:56.385 14:05:20 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev
00:23:56.385 14:05:20 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:23:56.951 14:05:21 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:23:56.951 14:05:21 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]]
00:23:56.951 14:05:21 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:56.951 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:56.951 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info
00:23:56.951 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs
00:23:56.951 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb
00:23:56.951 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:57.209 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[
00:23:57.209 {
00:23:57.209 "name": "0c94312c-9e34-4f2a-85e3-aa36d10e3c9b",
00:23:57.209 "aliases": [
00:23:57.209 "lvs/nvme0n1p0"
00:23:57.209 ],
00:23:57.209 "product_name": "Logical Volume",
00:23:57.209 "block_size": 4096,
00:23:57.209 "num_blocks": 26476544,
00:23:57.209 "uuid": "0c94312c-9e34-4f2a-85e3-aa36d10e3c9b",
00:23:57.209 "assigned_rate_limits": {
00:23:57.209 "rw_ios_per_sec": 0,
00:23:57.209 "rw_mbytes_per_sec": 0,
00:23:57.209 "r_mbytes_per_sec": 0,
00:23:57.209 "w_mbytes_per_sec": 0
00:23:57.209 },
00:23:57.209 "claimed": false,
00:23:57.209 "zoned": false,
00:23:57.209 "supported_io_types": {
00:23:57.209 "read": true,
00:23:57.209 "write": true,
00:23:57.209 "unmap": true,
00:23:57.209 "flush": false,
00:23:57.209 "reset": true,
00:23:57.209 "nvme_admin": false,
00:23:57.209 "nvme_io": false,
00:23:57.209 "nvme_io_md": false,
00:23:57.209 "write_zeroes": true,
00:23:57.209 "zcopy": false,
00:23:57.209 "get_zone_info": false,
00:23:57.209 "zone_management": false,
00:23:57.209 "zone_append": false,
00:23:57.209 "compare": false,
00:23:57.209 "compare_and_write": false,
00:23:57.209 "abort": false,
00:23:57.209 "seek_hole": true,
00:23:57.209 "seek_data": true,
00:23:57.209 "copy": false,
00:23:57.209 "nvme_iov_md": false
00:23:57.209 },
00:23:57.209 "driver_specific": {
00:23:57.209 "lvol": {
00:23:57.209 "lvol_store_uuid": "deecc4d3-243f-411f-857b-4ad8d16326d1",
00:23:57.209 "base_bdev": "nvme0n1",
00:23:57.209 "thin_provision": true,
00:23:57.209 "num_allocated_clusters": 0,
00:23:57.209 "snapshot": false,
00:23:57.209 "clone": false,
00:23:57.209 "esnap_clone": false
00:23:57.209 }
00:23:57.209 }
00:23:57.209 }
00:23:57.209 ]'
00:23:57.209 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:23:57.209 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096
00:23:57.209 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:23:57.209 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544
00:23:57.209 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:23:57.209 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424
00:23:57.209 14:05:21 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171
00:23:57.209 14:05:21 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:23:57.467 14:05:21 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0
00:23:57.467 14:05:21 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:57.467 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:57.467 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info
00:23:57.467 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs
00:23:57.467 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb
00:23:57.467 14:05:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c94312c-9e34-4f2a-85e3-aa36d10e3c9b
00:23:57.725 14:05:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[
00:23:57.725 {
00:23:57.726 "name": "0c94312c-9e34-4f2a-85e3-aa36d10e3c9b",
00:23:57.726 "aliases": [
00:23:57.726 "lvs/nvme0n1p0"
00:23:57.726 ],
00:23:57.726 "product_name": "Logical Volume",
00:23:57.726 "block_size": 4096,
00:23:57.726 "num_blocks": 26476544,
00:23:57.726 "uuid": "0c94312c-9e34-4f2a-85e3-aa36d10e3c9b",
00:23:57.726 "assigned_rate_limits": {
00:23:57.726 "rw_ios_per_sec": 0,
00:23:57.726 "rw_mbytes_per_sec": 0,
00:23:57.726 "r_mbytes_per_sec": 0,
00:23:57.726 "w_mbytes_per_sec": 0
00:23:57.726 },
00:23:57.726 "claimed": false,
00:23:57.726 "zoned": false,
00:23:57.726 "supported_io_types": {
00:23:57.726 "read": true,
00:23:57.726 "write": true,
00:23:57.726 "unmap": true,
00:23:57.726 "flush": false,
00:23:57.726 "reset": true,
00:23:57.726 "nvme_admin": false,
00:23:57.726 "nvme_io": false,
00:23:57.726 "nvme_io_md": false,
00:23:57.726 "write_zeroes": true,
00:23:57.726 "zcopy": false,
00:23:57.726 "get_zone_info": false,
00:23:57.726 "zone_management": false,
00:23:57.726 "zone_append": false,
00:23:57.726 "compare": false,
00:23:57.726 "compare_and_write": false,
00:23:57.726 "abort": false,
00:23:57.726 "seek_hole": true,
00:23:57.726 "seek_data": true,
00:23:57.726 "copy": false,
00:23:57.726 "nvme_iov_md": false
00:23:57.726 },
00:23:57.726 "driver_specific": {
00:23:57.726 "lvol": {
00:23:57.726 "lvol_store_uuid": "deecc4d3-243f-411f-857b-4ad8d16326d1",
00:23:57.726 "base_bdev": "nvme0n1",
00:23:57.726 "thin_provision": true,
00:23:57.726 "num_allocated_clusters": 0,
00:23:57.726 "snapshot": false,
00:23:57.726 "clone": false,
00:23:57.726 "esnap_clone": false
00:23:57.726 }
00:23:57.726 }
00:23:57.726 }
00:23:57.726 ]'
00:23:57.726 14:05:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:23:57.984 14:05:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096
00:23:57.984 14:05:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:23:57.984 14:05:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544
00:23:57.984 14:05:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:23:57.984 14:05:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424
00:23:57.984 14:05:22 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10
00:23:57.984 14:05:22 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0c94312c-9e34-4f2a-85e3-aa36d10e3c9b --l2p_dram_limit 10'
00:23:57.984 14:05:22 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']'
00:23:57.984 14:05:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:23:57.984 14:05:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
00:23:57.984 14:05:22 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']'
00:23:57.984 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected
00:23:57.984 14:05:22 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0c94312c-9e34-4f2a-85e3-aa36d10e3c9b --l2p_dram_limit 10 -c nvc0n1p0
00:23:57.984 [2024-07-15 14:05:22.487482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.984 [2024-07-15 14:05:22.487554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:23:57.984 [2024-07-15 14:05:22.487578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:23:57.984 [2024-07-15 14:05:22.487594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.984 [2024-07-15 14:05:22.487680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.984 [2024-07-15 14:05:22.487703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:57.984 [2024-07-15 14:05:22.487718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:23:57.984 [2024-07-15 14:05:22.487732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.984 [2024-07-15 14:05:22.487763] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:57.984 [2024-07-15 14:05:22.488745] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:57.984 [2024-07-15 14:05:22.488789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.984 [2024-07-15 14:05:22.488811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:57.984 [2024-07-15 14:05:22.488825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms
00:23:57.984 [2024-07-15 14:05:22.488839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.984 [2024-07-15 14:05:22.489033] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 48aaaeb7-ff59-47dc-b2d2-8bf1c9ea6f7e
00:23:57.984 [2024-07-15 14:05:22.490092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.984 [2024-07-15 14:05:22.490136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:23:57.984 [2024-07-15 14:05:22.490157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:23:57.984 [2024-07-15 14:05:22.490170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.984 [2024-07-15 14:05:22.494941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.984 [2024-07-15 14:05:22.494999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:23:57.984 [2024-07-15 14:05:22.495026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.704 ms
00:23:57.984 [2024-07-15 14:05:22.495039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.984 [2024-07-15 14:05:22.495182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.984 [2024-07-15 14:05:22.495205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:23:57.984 [2024-07-15 14:05:22.495221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms
00:23:57.984 [2024-07-15 14:05:22.495234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.984 [2024-07-15 14:05:22.495362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.984 [2024-07-15 14:05:22.495384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:23:57.984 [2024-07-15 14:05:22.495400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:23:57.984 [2024-07-15 14:05:22.495425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.984 [2024-07-15 14:05:22.495464] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:57.984 [2024-07-15 14:05:22.500022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.984 [2024-07-15 14:05:22.500069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:23:57.984 [2024-07-15 14:05:22.500086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.571 ms
00:23:57.984 [2024-07-15 14:05:22.500102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.984 [2024-07-15 14:05:22.500273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.984 [2024-07-15 14:05:22.500294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:23:57.984 [2024-07-15 14:05:22.500330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:23:57.984 [2024-07-15 14:05:22.500347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.984 [2024-07-15 14:05:22.500406] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:23:57.984 [2024-07-15 14:05:22.500574] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:57.984 [2024-07-15 14:05:22.500594] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:57.984 [2024-07-15 14:05:22.500615] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes
00:23:57.984 [2024-07-15 14:05:22.500632] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:23:57.984 [2024-07-15 14:05:22.500648] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:23:57.984 [2024-07-15 14:05:22.500662] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:23:57.984 [2024-07-15 14:05:22.500676] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:23:57.984 [2024-07-15 14:05:22.500690] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:23:57.984 [2024-07-15 14:05:22.500705] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:23:57.984 [2024-07-15 14:05:22.500719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.984 [2024-07-15 14:05:22.500733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:23:57.984 [2024-07-15 14:05:22.500746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms
00:23:57.984 [2024-07-15 14:05:22.500760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.984 [2024-07-15 14:05:22.500855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.984 [2024-07-15 14:05:22.500873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:23:57.984 [2024-07-15 14:05:22.500887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:23:57.984 [2024-07-15 14:05:22.500901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.984 [2024-07-15 14:05:22.501015] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:57.984 [2024-07-15 14:05:22.501037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:57.984 [2024-07-15 14:05:22.501062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:23:57.984 [2024-07-15 14:05:22.501078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:57.984 [2024-07-15 14:05:22.501104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:23:57.984 [2024-07-15 14:05:22.501130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:57.984 [2024-07-15 14:05:22.501142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:23:57.984 [2024-07-15 14:05:22.501176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:57.984 [2024-07-15 14:05:22.501189] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:23:57.984 [2024-07-15 14:05:22.501200] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:23:57.984 [2024-07-15 14:05:22.501215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:57.984 [2024-07-15 14:05:22.501227] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:23:57.984 [2024-07-15 14:05:22.501241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:57.984 [2024-07-15 14:05:22.501269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:23:57.984 [2024-07-15 14:05:22.501281] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:57.984 [2024-07-15 14:05:22.501323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:23:57.984 [2024-07-15 14:05:22.501351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:57.984 [2024-07-15 14:05:22.501364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:23:57.984 [2024-07-15 14:05:22.501389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:57.984 [2024-07-15 14:05:22.501400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501413] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:23:57.984 [2024-07-15 14:05:22.501424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:57.984 [2024-07-15 14:05:22.501438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:23:57.984 [2024-07-15 14:05:22.501462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:57.984 [2024-07-15 14:05:22.501473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:23:57.984 [2024-07-15 14:05:22.501500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:57.984 [2024-07-15 14:05:22.501514] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:23:57.984 [2024-07-15 14:05:22.501525] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:23:57.984 [2024-07-15 14:05:22.501538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:57.984 [2024-07-15 14:05:22.501550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:23:57.984 [2024-07-15 14:05:22.501565] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:57.984 [2024-07-15 14:05:22.501589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:23:57.984 [2024-07-15 14:05:22.501601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501614] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:57.984 [2024-07-15 14:05:22.501626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:57.984 [2024-07-15 14:05:22.501640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:23:57.984 [2024-07-15 14:05:22.501652] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:23:57.984 [2024-07-15 14:05:22.501666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:57.984 [2024-07-15 14:05:22.501679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:23:57.984 [2024-07-15 14:05:22.501696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:23:57.984 [2024-07-15 14:05:22.501708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:57.984 [2024-07-15 14:05:22.501721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:23:57.984 [2024-07-15 14:05:22.501733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:23:57.984 [2024-07-15 14:05:22.501750] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:57.984 [2024-07-15 14:05:22.501765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:57.984 [2024-07-15 14:05:22.501784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:23:57.984 [2024-07-15 14:05:22.501797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:23:57.984 [2024-07-15 14:05:22.501811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:23:57.984 [2024-07-15 14:05:22.501823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:23:57.984 [2024-07-15 14:05:22.501838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:23:57.984 [2024-07-15 14:05:22.501850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:23:57.984 [2024-07-15 14:05:22.501864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:23:57.984 [2024-07-15 14:05:22.501876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:23:57.984 [2024-07-15 14:05:22.501892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:23:57.984 [2024-07-15 14:05:22.501904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:23:57.984 [2024-07-15 14:05:22.501919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:23:57.984 [2024-07-15 14:05:22.501932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:23:57.984 [2024-07-15 14:05:22.501946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:23:57.984 [2024-07-15 14:05:22.501958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:23:57.984 [2024-07-15 14:05:22.501972] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:57.984 [2024-07-15 14:05:22.501986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:57.985 [2024-07-15 14:05:22.502002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:57.985 [2024-07-15 14:05:22.502015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:57.985 [2024-07-15 14:05:22.502029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:57.985 [2024-07-15 14:05:22.502041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:57.985 [2024-07-15 14:05:22.502057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:57.985 [2024-07-15 14:05:22.502070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:23:57.985 [2024-07-15 14:05:22.502085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms
00:23:57.985 [2024-07-15 14:05:22.502097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:57.985 [2024-07-15 14:05:22.502156] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:23:57.985 [2024-07-15 14:05:22.502173] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:23:59.901 [2024-07-15 14:05:24.384357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:59.901 [2024-07-15 14:05:24.384440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:23:59.901 [2024-07-15 14:05:24.384486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1882.202 ms
00:23:59.901 [2024-07-15 14:05:24.384509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:59.901 [2024-07-15 14:05:24.418185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:59.901 [2024-07-15 14:05:24.418244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:23:59.901 [2024-07-15 14:05:24.418269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.292 ms
00:23:59.901 [2024-07-15 14:05:24.418283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:59.901 [2024-07-15 14:05:24.418509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:59.901 [2024-07-15 14:05:24.418533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:23:59.901 [2024-07-15 14:05:24.418550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:23:59.901 [2024-07-15 14:05:24.418566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.160 [2024-07-15 14:05:24.457430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.160 [2024-07-15 14:05:24.457489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:00.160 [2024-07-15 14:05:24.457513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.801 ms
00:24:00.160 [2024-07-15 14:05:24.457526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.160 [2024-07-15 14:05:24.457601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.160 [2024-07-15 14:05:24.457626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:00.160 [2024-07-15 14:05:24.457643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:24:00.160 [2024-07-15 14:05:24.457656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.160 [2024-07-15 14:05:24.458036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.160 [2024-07-15 14:05:24.458057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:00.160 [2024-07-15 14:05:24.458073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms
00:24:00.160 [2024-07-15 14:05:24.458085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.160 [2024-07-15 14:05:24.458247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.160 [2024-07-15 14:05:24.458266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:00.160 [2024-07-15 14:05:24.458285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms
00:24:00.160 [2024-07-15 14:05:24.458297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.160 [2024-07-15 14:05:24.475665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.160 [2024-07-15 14:05:24.475722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:00.160 [2024-07-15 14:05:24.475746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.308 ms
00:24:00.160 [2024-07-15 14:05:24.475759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.160 [2024-07-15 14:05:24.489359] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:00.160 [2024-07-15 14:05:24.492064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.160 [2024-07-15 14:05:24.492109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:24:00.160 [2024-07-15 14:05:24.492128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.177 ms
00:24:00.160 [2024-07-15 14:05:24.492144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.160 [2024-07-15 14:05:24.566062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.160 [2024-07-15 14:05:24.566146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:24:00.160 [2024-07-15 14:05:24.566171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.867 ms
00:24:00.160 [2024-07-15 14:05:24.566186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.160 [2024-07-15 14:05:24.566475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.160 [2024-07-15 14:05:24.566507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:24:00.160 [2024-07-15 14:05:24.566522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms
00:24:00.160 [2024-07-15 14:05:24.566551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.160 [2024-07-15 14:05:24.598246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.160 [2024-07-15 14:05:24.598357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:24:00.160 [2024-07-15 14:05:24.598384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.593 ms
00:24:00.160 [2024-07-15 14:05:24.598402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.160 [2024-07-15 14:05:24.630818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.160 [2024-07-15 14:05:24.630895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:24:00.160 [2024-07-15 14:05:24.630918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.352 ms
00:24:00.160 [2024-07-15 14:05:24.630933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.160 [2024-07-15 14:05:24.631743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.160 [2024-07-15 14:05:24.631777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:24:00.160 [2024-07-15 14:05:24.631793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms
00:24:00.160 [2024-07-15 14:05:24.631812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.418 [2024-07-15 14:05:24.719253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.419 [2024-07-15 14:05:24.719337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:24:00.419 [2024-07-15 14:05:24.719362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.353 ms
00:24:00.419 [2024-07-15 14:05:24.719382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.419 [2024-07-15 14:05:24.752333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.419 [2024-07-15 14:05:24.752408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:24:00.419 [2024-07-15 14:05:24.752432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.879 ms
00:24:00.419 [2024-07-15 14:05:24.752448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.419 [2024-07-15 14:05:24.784265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.419 [2024-07-15 14:05:24.784354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:24:00.419 [2024-07-15 14:05:24.784376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.752 ms
00:24:00.419 [2024-07-15 14:05:24.784391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.419 [2024-07-15 14:05:24.816486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.419 [2024-07-15 14:05:24.816554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:24:00.419 [2024-07-15 14:05:24.816575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.032 ms
00:24:00.419 [2024-07-15 14:05:24.816591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.419 [2024-07-15 14:05:24.816668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.419 [2024-07-15 14:05:24.816693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:24:00.419 [2024-07-15 14:05:24.816708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms
00:24:00.419 [2024-07-15 14:05:24.816726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.419 [2024-07-15 14:05:24.816849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.419 [2024-07-15 14:05:24.816874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:00.419 [2024-07-15 14:05:24.816892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:24:00.419 [2024-07-15 14:05:24.816906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.419 [2024-07-15 14:05:24.818139] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2330.118 ms, result 0
00:24:00.419 {
00:24:00.419 "name": "ftl0",
00:24:00.419 "uuid": "48aaaeb7-ff59-47dc-b2d2-8bf1c9ea6f7e"
00:24:00.419 }
00:24:00.419 14:05:24 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:24:00.419 14:05:24 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:24:00.677 14:05:25 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}'
00:24:00.677 14:05:25 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:24:00.936 [2024-07-15 14:05:25.401820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.936 [2024-07-15 14:05:25.401906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:00.936 [2024-07-15 14:05:25.401935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:24:00.936 [2024-07-15 14:05:25.401949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.936 [2024-07-15 14:05:25.401995] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:00.936 [2024-07-15 14:05:25.405432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.936 [2024-07-15 14:05:25.405477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:00.936 [2024-07-15 14:05:25.405495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms
00:24:00.936 [2024-07-15 14:05:25.405511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.936 [2024-07-15 14:05:25.405827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.936 [2024-07-15 14:05:25.405864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:00.936 [2024-07-15 14:05:25.405901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms
00:24:00.936 [2024-07-15 14:05:25.405928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.936 [2024-07-15 14:05:25.409273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.936 [2024-07-15 14:05:25.409335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:24:00.936 [2024-07-15 14:05:25.409354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.309 ms
00:24:00.936 [2024-07-15 14:05:25.409369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.936 [2024-07-15 14:05:25.416195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.936 [2024-07-15 14:05:25.416242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:24:00.936 [2024-07-15 14:05:25.416262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.797 ms
00:24:00.936 [2024-07-15 14:05:25.416277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.936 [2024-07-15 14:05:25.449228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.936 [2024-07-15 14:05:25.449289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:24:00.936 [2024-07-15 14:05:25.449327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.839 ms
00:24:00.936 [2024-07-15 14:05:25.449345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.936 [2024-07-15 14:05:25.467973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.936 [2024-07-15 14:05:25.468037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:24:00.936 [2024-07-15 14:05:25.468058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.569 ms
00:24:00.936 [2024-07-15 14:05:25.468074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.936 [2024-07-15 14:05:25.468273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.936 [2024-07-15 14:05:25.468300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:24:00.936 [2024-07-15 14:05:25.468347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms
00:24:00.936 [2024-07-15 14:05:25.468363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.195 [2024-07-15 14:05:25.499901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:01.195 [2024-07-15 14:05:25.499967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:24:01.195 [2024-07-15 14:05:25.499989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.508 ms
00:24:01.195 [2024-07-15 14:05:25.500004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.195 [2024-07-15 14:05:25.531938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:01.195 [2024-07-15 14:05:25.532040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:24:01.195 [2024-07-15 14:05:25.532063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.874 ms
00:24:01.196 [2024-07-15 14:05:25.532079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.196 [2024-07-15 14:05:25.564174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:01.196 [2024-07-15 14:05:25.564264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:24:01.196 [2024-07-15 14:05:25.564287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.006 ms
00:24:01.196 [2024-07-15 14:05:25.564323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.196 [2024-07-15 14:05:25.599140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:01.196 [2024-07-15 14:05:25.599228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:24:01.196 [2024-07-15 14:05:25.599251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.632 ms
00:24:01.196 [2024-07-15 14:05:25.599267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.196 [2024-07-15 14:05:25.599372] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:01.196 [2024-07-15 14:05:25.599407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.599990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:24:01.196 [2024-07-15 14:05:25.600699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:24:01.197 [2024-07-15 14:05:25.600910] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:01.197 [2024-07-15 14:05:25.600926] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48aaaeb7-ff59-47dc-b2d2-8bf1c9ea6f7e
00:24:01.197 [2024-07-15 14:05:25.600941] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:24:01.197 [2024-07-15 14:05:25.600954] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:24:01.197 [2024-07-15 14:05:25.600972] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:24:01.197 [2024-07-15 14:05:25.600985] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:24:01.197 [2024-07-15 14:05:25.601000] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:01.197 [2024-07-15 14:05:25.601022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:01.197 [2024-07-15 14:05:25.601036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:01.197 [2024-07-15 14:05:25.601048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:24:01.197 [2024-07-15 14:05:25.601061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:01.197 [2024-07-15 14:05:25.601074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:01.197 [2024-07-15 14:05:25.601088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:01.197 [2024-07-15 14:05:25.601102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms
00:24:01.197 [2024-07-15 14:05:25.601116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.197 [2024-07-15 14:05:25.620591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:01.197 [2024-07-15 14:05:25.620681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:01.197 [2024-07-15 14:05:25.620705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.372 ms
00:24:01.197 [2024-07-15 14:05:25.620720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.197 [2024-07-15 14:05:25.621193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:01.197 [2024-07-15 14:05:25.621225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:01.197 [2024-07-15 14:05:25.621241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms
00:24:01.197 [2024-07-15 14:05:25.621260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.197 [2024-07-15 14:05:25.673349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.197 [2024-07-15 14:05:25.673433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:01.197 [2024-07-15 14:05:25.673454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:01.197 [2024-07-15 14:05:25.673470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.197 [2024-07-15 14:05:25.673562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.197 [2024-07-15 14:05:25.673582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:01.197 [2024-07-15 14:05:25.673596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:01.197 [2024-07-15 14:05:25.673614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.197 [2024-07-15 14:05:25.673738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.197 [2024-07-15 14:05:25.673764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:01.197 [2024-07-15 14:05:25.673778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:01.197 [2024-07-15 14:05:25.673793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.197 [2024-07-15 14:05:25.673821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.197 [2024-07-15 14:05:25.673842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:01.197 [2024-07-15 14:05:25.673856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:01.197 [2024-07-15 14:05:25.673870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.456 [2024-07-15 14:05:25.772943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.456 [2024-07-15 14:05:25.773007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:01.456 [2024-07-15 14:05:25.773027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:01.456 [2024-07-15 14:05:25.773042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.456 [2024-07-15 14:05:25.856852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.456 [2024-07-15 14:05:25.856921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:01.456 [2024-07-15 14:05:25.856942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:01.456 [2024-07-15 14:05:25.856961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.456 [2024-07-15 14:05:25.857074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.456 [2024-07-15 14:05:25.857098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:01.456 [2024-07-15 14:05:25.857112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:01.456 [2024-07-15 14:05:25.857128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.456 [2024-07-15 14:05:25.857192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.456 [2024-07-15 14:05:25.857217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:01.456 [2024-07-15 14:05:25.857231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:01.456 [2024-07-15 14:05:25.857245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.456 [2024-07-15 14:05:25.857404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.456 [2024-07-15 14:05:25.857428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:01.456 [2024-07-15 14:05:25.857442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:01.456 [2024-07-15 14:05:25.857456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.456 [2024-07-15 14:05:25.857517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.456 [2024-07-15 14:05:25.857541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:01.456 [2024-07-15 14:05:25.857555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:01.456 [2024-07-15 14:05:25.857570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.456 [2024-07-15 14:05:25.857624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.456 [2024-07-15 14:05:25.857650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:01.456 [2024-07-15 14:05:25.857664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:01.456 [2024-07-15 14:05:25.857678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:01.456 [2024-07-15 14:05:25.857736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:01.456 [2024-07-15 14:05:25.857762]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:01.456 [2024-07-15 14:05:25.857776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.456 [2024-07-15 14:05:25.857790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.456 [2024-07-15 14:05:25.857951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 456.126 ms, result 0 00:24:01.456 true 00:24:01.456 14:05:25 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 81715 00:24:01.456 14:05:25 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81715 ']' 00:24:01.456 14:05:25 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81715 00:24:01.456 14:05:25 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:24:01.456 14:05:25 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.456 14:05:25 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81715 00:24:01.456 killing process with pid 81715 00:24:01.456 14:05:25 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:01.456 14:05:25 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:01.456 14:05:25 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81715' 00:24:01.456 14:05:25 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 81715 00:24:01.456 14:05:25 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 81715 00:24:06.721 14:05:30 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:10.908 262144+0 records in 00:24:10.908 262144+0 records out 00:24:10.908 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.73634 s, 227 MB/s 00:24:10.908 14:05:35 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:13.440 14:05:37 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:13.440 [2024-07-15 14:05:37.696566] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
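The killprocess xtrace above ('[' -z ... ']', kill -0, uname, ps --no-headers -o comm=, then kill and wait) is the harness's kill-and-wait idiom: verify a pid was passed, probe that the process is still alive, refuse to signal a bare sudo, then terminate and reap it. A minimal sketch of that pattern, with illustrative names only (the real helper in common/autotest_common.sh is more involved):

    # Illustrative reduction of the pattern traced above -- not the actual helper.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                          # the '[' -z 81715 ']' guard
        kill -0 "$pid" || return 1                         # still running?
        local name=
        [[ $(uname) == Linux ]] && name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        [[ $name == sudo ]] && return 1                    # never SIGTERM a bare sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                         # terminate, then reap
    }

The dd output just above is also internally consistent: 256Ki records of 4 KiB each is exactly 1 GiB, and the reported rate is simply bytes over elapsed seconds in decimal megabytes:

    $ echo $((262144 * 4096))                   # records x block size
    1073741824
    $ awk 'BEGIN { printf "%.1f MB/s\n", 1073741824 / 4.73634 / 1e6 }'
    226.7 MB/s                                  # dd rounds this to the 227 MB/s shown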
00:24:13.440 [2024-07-15 14:05:37.696888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81946 ] 00:24:13.440 [2024-07-15 14:05:37.862408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.698 [2024-07-15 14:05:38.087855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.956 [2024-07-15 14:05:38.402564] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:13.956 [2024-07-15 14:05:38.402640] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:14.215 [2024-07-15 14:05:38.562384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.215 [2024-07-15 14:05:38.562447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:14.215 [2024-07-15 14:05:38.562468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:14.215 [2024-07-15 14:05:38.562480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.215 [2024-07-15 14:05:38.562557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.215 [2024-07-15 14:05:38.562579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.215 [2024-07-15 14:05:38.562592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:14.215 [2024-07-15 14:05:38.562607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.215 [2024-07-15 14:05:38.562638] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:14.215 [2024-07-15 14:05:38.563575] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:14.215 [2024-07-15 14:05:38.563608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.215 [2024-07-15 14:05:38.563626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.215 [2024-07-15 14:05:38.563639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:24:14.215 [2024-07-15 14:05:38.563650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.215 [2024-07-15 14:05:38.564794] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:14.215 [2024-07-15 14:05:38.581108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.215 [2024-07-15 14:05:38.581155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:14.215 [2024-07-15 14:05:38.581174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.315 ms 00:24:14.215 [2024-07-15 14:05:38.581185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.215 [2024-07-15 14:05:38.581262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.215 [2024-07-15 14:05:38.581283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:14.215 [2024-07-15 14:05:38.581328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:14.215 [2024-07-15 14:05:38.581344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.215 [2024-07-15 14:05:38.585772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:14.215 [2024-07-15 14:05:38.585818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.215 [2024-07-15 14:05:38.585834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.333 ms 00:24:14.215 [2024-07-15 14:05:38.585845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.215 [2024-07-15 14:05:38.585944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.215 [2024-07-15 14:05:38.585967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.215 [2024-07-15 14:05:38.585980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:14.215 [2024-07-15 14:05:38.585991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.215 [2024-07-15 14:05:38.586056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.215 [2024-07-15 14:05:38.586073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:14.215 [2024-07-15 14:05:38.586085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:14.215 [2024-07-15 14:05:38.586096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.215 [2024-07-15 14:05:38.586130] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:14.215 [2024-07-15 14:05:38.590428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.215 [2024-07-15 14:05:38.590468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:14.215 [2024-07-15 14:05:38.590483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.307 ms 00:24:14.216 [2024-07-15 14:05:38.590494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.216 [2024-07-15 14:05:38.590542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.216 [2024-07-15 14:05:38.590558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:14.216 [2024-07-15 14:05:38.590570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:14.216 [2024-07-15 14:05:38.590581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.216 [2024-07-15 14:05:38.590628] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:14.216 [2024-07-15 14:05:38.590660] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:14.216 [2024-07-15 14:05:38.590703] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:14.216 [2024-07-15 14:05:38.590726] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:14.216 [2024-07-15 14:05:38.590833] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:14.216 [2024-07-15 14:05:38.590848] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:14.216 [2024-07-15 14:05:38.590863] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:14.216 [2024-07-15 14:05:38.590877] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:14.216 [2024-07-15 14:05:38.590891] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:14.216 [2024-07-15 14:05:38.590903] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:14.216 [2024-07-15 14:05:38.590914] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:14.216 [2024-07-15 14:05:38.590925] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:14.216 [2024-07-15 14:05:38.590935] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:14.216 [2024-07-15 14:05:38.590947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.216 [2024-07-15 14:05:38.590962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:14.216 [2024-07-15 14:05:38.590975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:24:14.216 [2024-07-15 14:05:38.590985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.216 [2024-07-15 14:05:38.591074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.216 [2024-07-15 14:05:38.591088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:14.216 [2024-07-15 14:05:38.591100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:14.216 [2024-07-15 14:05:38.591111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.216 [2024-07-15 14:05:38.591218] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:14.216 [2024-07-15 14:05:38.591235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:14.216 [2024-07-15 14:05:38.591252] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:14.216 [2024-07-15 14:05:38.591264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:14.216 [2024-07-15 14:05:38.591287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:14.216 [2024-07-15 14:05:38.591333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:14.216 [2024-07-15 14:05:38.591345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591356] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.216 [2024-07-15 14:05:38.591367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:14.216 [2024-07-15 14:05:38.591378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:14.216 [2024-07-15 14:05:38.591388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.216 [2024-07-15 14:05:38.591398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:14.216 [2024-07-15 14:05:38.591410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:14.216 [2024-07-15 14:05:38.591420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:14.216 [2024-07-15 14:05:38.591447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:14.216 [2024-07-15 14:05:38.591457] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:14.216 [2024-07-15 14:05:38.591491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.216 [2024-07-15 14:05:38.591513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:14.216 [2024-07-15 14:05:38.591523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.216 [2024-07-15 14:05:38.591543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:14.216 [2024-07-15 14:05:38.591554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.216 [2024-07-15 14:05:38.591573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:14.216 [2024-07-15 14:05:38.591584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.216 [2024-07-15 14:05:38.591604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:14.216 [2024-07-15 14:05:38.591615] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591625] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.216 [2024-07-15 14:05:38.591635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:14.216 [2024-07-15 14:05:38.591646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:14.216 [2024-07-15 14:05:38.591656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.216 [2024-07-15 14:05:38.591666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:14.216 [2024-07-15 14:05:38.591676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:14.216 [2024-07-15 14:05:38.591686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:14.216 [2024-07-15 14:05:38.591706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:14.216 [2024-07-15 14:05:38.591716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591726] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:14.216 [2024-07-15 14:05:38.591737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:14.216 [2024-07-15 14:05:38.591748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:14.216 [2024-07-15 14:05:38.591759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.216 [2024-07-15 14:05:38.591771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:14.216 [2024-07-15 14:05:38.591782] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:14.216 [2024-07-15 14:05:38.591793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:14.216 
[2024-07-15 14:05:38.591803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:14.216 [2024-07-15 14:05:38.591814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:14.216 [2024-07-15 14:05:38.591825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:14.216 [2024-07-15 14:05:38.591836] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:14.216 [2024-07-15 14:05:38.591850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.216 [2024-07-15 14:05:38.591862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:14.216 [2024-07-15 14:05:38.591874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:14.216 [2024-07-15 14:05:38.591885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:14.216 [2024-07-15 14:05:38.591897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:14.216 [2024-07-15 14:05:38.591908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:14.216 [2024-07-15 14:05:38.591919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:14.216 [2024-07-15 14:05:38.591931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:14.216 [2024-07-15 14:05:38.591942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:14.216 [2024-07-15 14:05:38.591953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:14.216 [2024-07-15 14:05:38.591964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:14.216 [2024-07-15 14:05:38.591975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:14.216 [2024-07-15 14:05:38.591986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:14.216 [2024-07-15 14:05:38.591997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:14.216 [2024-07-15 14:05:38.592009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:14.216 [2024-07-15 14:05:38.592020] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:14.216 [2024-07-15 14:05:38.592032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.216 [2024-07-15 14:05:38.592044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:14.216 [2024-07-15 14:05:38.592056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:14.216 [2024-07-15 14:05:38.592067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:14.216 [2024-07-15 14:05:38.592078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:14.216 [2024-07-15 14:05:38.592091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.216 [2024-07-15 14:05:38.592107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:14.216 [2024-07-15 14:05:38.592119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:24:14.216 [2024-07-15 14:05:38.592130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.216 [2024-07-15 14:05:38.631190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.217 [2024-07-15 14:05:38.631259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.217 [2024-07-15 14:05:38.631282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.973 ms 00:24:14.217 [2024-07-15 14:05:38.631294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.217 [2024-07-15 14:05:38.631447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.217 [2024-07-15 14:05:38.631466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:14.217 [2024-07-15 14:05:38.631479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:14.217 [2024-07-15 14:05:38.631489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.217 [2024-07-15 14:05:38.670402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.217 [2024-07-15 14:05:38.670459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.217 [2024-07-15 14:05:38.670478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.819 ms 00:24:14.217 [2024-07-15 14:05:38.670489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.217 [2024-07-15 14:05:38.670561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.217 [2024-07-15 14:05:38.670578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.217 [2024-07-15 14:05:38.670592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:14.217 [2024-07-15 14:05:38.670603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.217 [2024-07-15 14:05:38.671013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.217 [2024-07-15 14:05:38.671043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.217 [2024-07-15 14:05:38.671063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:24:14.217 [2024-07-15 14:05:38.671081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.217 [2024-07-15 14:05:38.671292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.217 [2024-07-15 14:05:38.671351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.217 [2024-07-15 14:05:38.671371] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:24:14.217 [2024-07-15 14:05:38.671384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.217 [2024-07-15 14:05:38.687832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.217 [2024-07-15 14:05:38.687882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.217 [2024-07-15 14:05:38.687900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.414 ms 00:24:14.217 [2024-07-15 14:05:38.687912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.217 [2024-07-15 14:05:38.704430] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:14.217 [2024-07-15 14:05:38.704478] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:14.217 [2024-07-15 14:05:38.704503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.217 [2024-07-15 14:05:38.704516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:14.217 [2024-07-15 14:05:38.704530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.441 ms 00:24:14.217 [2024-07-15 14:05:38.704541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.217 [2024-07-15 14:05:38.735651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.217 [2024-07-15 14:05:38.735704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:14.217 [2024-07-15 14:05:38.735723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.058 ms 00:24:14.217 [2024-07-15 14:05:38.735736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.217 [2024-07-15 14:05:38.751467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.217 [2024-07-15 14:05:38.751514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:14.217 [2024-07-15 14:05:38.751532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.660 ms 00:24:14.217 [2024-07-15 14:05:38.751543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.475 [2024-07-15 14:05:38.766932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.475 [2024-07-15 14:05:38.766974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:14.475 [2024-07-15 14:05:38.766990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.342 ms 00:24:14.476 [2024-07-15 14:05:38.767001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.476 [2024-07-15 14:05:38.767831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.476 [2024-07-15 14:05:38.767868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:14.476 [2024-07-15 14:05:38.767883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:24:14.476 [2024-07-15 14:05:38.767894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.476 [2024-07-15 14:05:38.840429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.476 [2024-07-15 14:05:38.840501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:14.476 [2024-07-15 14:05:38.840522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.508 ms 00:24:14.476 [2024-07-15 14:05:38.840534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.476 [2024-07-15 14:05:38.853130] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:14.476 [2024-07-15 14:05:38.855710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.476 [2024-07-15 14:05:38.855753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:14.476 [2024-07-15 14:05:38.855771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.102 ms 00:24:14.476 [2024-07-15 14:05:38.855783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.476 [2024-07-15 14:05:38.855901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.476 [2024-07-15 14:05:38.855921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:14.476 [2024-07-15 14:05:38.855935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:14.476 [2024-07-15 14:05:38.855946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.476 [2024-07-15 14:05:38.856033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.476 [2024-07-15 14:05:38.856052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:14.476 [2024-07-15 14:05:38.856070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:14.476 [2024-07-15 14:05:38.856081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.476 [2024-07-15 14:05:38.856113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.476 [2024-07-15 14:05:38.856129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:14.476 [2024-07-15 14:05:38.856141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:14.476 [2024-07-15 14:05:38.856151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.476 [2024-07-15 14:05:38.856191] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:14.476 [2024-07-15 14:05:38.856208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.476 [2024-07-15 14:05:38.856219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:14.476 [2024-07-15 14:05:38.856231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:14.476 [2024-07-15 14:05:38.856246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.476 [2024-07-15 14:05:38.890980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.476 [2024-07-15 14:05:38.891147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:14.476 [2024-07-15 14:05:38.891267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.708 ms 00:24:14.476 [2024-07-15 14:05:38.891404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.476 [2024-07-15 14:05:38.891533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.476 [2024-07-15 14:05:38.891655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:14.476 [2024-07-15 14:05:38.891771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:14.476 [2024-07-15 14:05:38.891821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
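The layout dump in the startup sequence above is easy to cross-check, because superblock regions are expressed in 4 KiB blocks. Region type 0x2 at blk_offs:0x20 blk_sz:0x5000 is the L2P table: 0x20 blocks is 0.125 MiB (the "offset: 0.12 MiB" line) and 0x5000 blocks is 80 MiB, which also equals the advertised 20971520 L2P entries at 4 bytes apiece. A quick shell check of those equalities (illustrative only):

    $ echo $(( 0x5000 * 4096 / 1048576 ))       # L2P region: blocks -> MiB
    80
    $ echo $(( 20971520 * 4 / 1048576 ))        # entries x address size -> MiB
    80
    $ echo $(( 0x1900000 * 4096 / 1048576 ))    # base data region (type 0x9) -> MiB
    102400                                      # matches data_btm: 102400.00 MiB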
00:24:14.476 [2024-07-15 14:05:38.893060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.194 ms, result 0 00:24:49.108  Copying: 28/1024 [MB] (28 MBps) [33 further progress-meter updates at 27-31 MBps elided] Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-15 14:06:13.535678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.108 [2024-07-15 14:06:13.535774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:49.108 [2024-07-15 14:06:13.535798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:49.108 [2024-07-15 14:06:13.535811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.108 [2024-07-15 14:06:13.535841] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:49.108 [2024-07-15 14:06:13.539325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.108 [2024-07-15 14:06:13.539369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:49.108 [2024-07-15 14:06:13.539385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.460 ms 00:24:49.108 [2024-07-15 14:06:13.539398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.108 [2024-07-15 14:06:13.540762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.108 [2024-07-15 14:06:13.540808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:49.108 [2024-07-15 14:06:13.540833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.331 ms 00:24:49.108 [2024-07-15 14:06:13.540845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.108 [2024-07-15 14:06:13.556765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.108 [2024-07-15 14:06:13.556814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:49.108 [2024-07-15 14:06:13.556833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.897 ms 00:24:49.108 [2024-07-15 14:06:13.556845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.108 [2024-07-15 14:06:13.563612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.108 [2024-07-15 14:06:13.563649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:49.108 [2024-07-15 14:06:13.563672]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.725 ms 00:24:49.108 [2024-07-15 14:06:13.563684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.108 [2024-07-15 14:06:13.595025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.108 [2024-07-15 14:06:13.595076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:49.108 [2024-07-15 14:06:13.595093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.268 ms 00:24:49.108 [2024-07-15 14:06:13.595105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.108 [2024-07-15 14:06:13.612964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.108 [2024-07-15 14:06:13.613016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:49.108 [2024-07-15 14:06:13.613034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.810 ms 00:24:49.108 [2024-07-15 14:06:13.613046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.108 [2024-07-15 14:06:13.613212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.108 [2024-07-15 14:06:13.613235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:49.108 [2024-07-15 14:06:13.613249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:24:49.108 [2024-07-15 14:06:13.613260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.108 [2024-07-15 14:06:13.644836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.108 [2024-07-15 14:06:13.644898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:49.108 [2024-07-15 14:06:13.644919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.549 ms 00:24:49.108 [2024-07-15 14:06:13.644931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.366 [2024-07-15 14:06:13.676672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.366 [2024-07-15 14:06:13.676721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:49.366 [2024-07-15 14:06:13.676738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.694 ms 00:24:49.366 [2024-07-15 14:06:13.676750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.366 [2024-07-15 14:06:13.707870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.366 [2024-07-15 14:06:13.707935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:49.366 [2024-07-15 14:06:13.707953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.072 ms 00:24:49.366 [2024-07-15 14:06:13.707983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.366 [2024-07-15 14:06:13.739072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.366 [2024-07-15 14:06:13.739134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:49.366 [2024-07-15 14:06:13.739153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.976 ms 00:24:49.366 [2024-07-15 14:06:13.739165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.366 [2024-07-15 14:06:13.739236] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:49.366 [2024-07-15 14:06:13.739265] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739583] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:49.366 [2024-07-15 14:06:13.739630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 
14:06:13.739873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.739995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:24:49.367 [2024-07-15 14:06:13.740171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:24:49.367 [2024-07-15 14:06:13.740483] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:49.367 [2024-07-15 14:06:13.740494] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48aaaeb7-ff59-47dc-b2d2-8bf1c9ea6f7e 00:24:49.367 [2024-07-15 14:06:13.740517] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:49.367 [2024-07-15 14:06:13.740527] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:49.367 [2024-07-15 14:06:13.740538] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:49.367 [2024-07-15 14:06:13.740558] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:49.367 [2024-07-15 14:06:13.740568] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:49.367 [2024-07-15 14:06:13.740579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:49.367 [2024-07-15 14:06:13.740590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:49.367 [2024-07-15 14:06:13.740600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:49.367 [2024-07-15 14:06:13.740610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:49.367 [2024-07-15 14:06:13.740621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.367 [2024-07-15 14:06:13.740632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:49.367 [2024-07-15 14:06:13.740644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:24:49.367 [2024-07-15 14:06:13.740654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.367 [2024-07-15 14:06:13.757447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.367 [2024-07-15 14:06:13.757518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:49.367 [2024-07-15 14:06:13.757537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.740 ms 00:24:49.367 [2024-07-15 14:06:13.757563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.367 [2024-07-15 14:06:13.758006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.367 [2024-07-15 14:06:13.758041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:49.367 [2024-07-15 14:06:13.758055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:24:49.367 [2024-07-15 14:06:13.758065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.367 [2024-07-15 14:06:13.795050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.367 [2024-07-15 14:06:13.795130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.367 [2024-07-15 14:06:13.795149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.367 [2024-07-15 14:06:13.795161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.367 [2024-07-15 14:06:13.795252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.367 [2024-07-15 14:06:13.795269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.367 [2024-07-15 14:06:13.795281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.367 [2024-07-15 14:06:13.795292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:49.367 [2024-07-15 14:06:13.795417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.368 [2024-07-15 14:06:13.795445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.368 [2024-07-15 14:06:13.795458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.368 [2024-07-15 14:06:13.795469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.368 [2024-07-15 14:06:13.795492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.368 [2024-07-15 14:06:13.795506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.368 [2024-07-15 14:06:13.795517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.368 [2024-07-15 14:06:13.795527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.368 [2024-07-15 14:06:13.895072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.368 [2024-07-15 14:06:13.895173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.368 [2024-07-15 14:06:13.895195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.368 [2024-07-15 14:06:13.895207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.626 [2024-07-15 14:06:13.979809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.626 [2024-07-15 14:06:13.979876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.626 [2024-07-15 14:06:13.979895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.626 [2024-07-15 14:06:13.979907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.626 [2024-07-15 14:06:13.979989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.626 [2024-07-15 14:06:13.980008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.626 [2024-07-15 14:06:13.980031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.626 [2024-07-15 14:06:13.980052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.626 [2024-07-15 14:06:13.980099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.626 [2024-07-15 14:06:13.980114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.626 [2024-07-15 14:06:13.980125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.626 [2024-07-15 14:06:13.980136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.626 [2024-07-15 14:06:13.980255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.626 [2024-07-15 14:06:13.980276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.626 [2024-07-15 14:06:13.980289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.626 [2024-07-15 14:06:13.980331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.626 [2024-07-15 14:06:13.980387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.626 [2024-07-15 14:06:13.980420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:49.626 [2024-07-15 14:06:13.980433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.626 [2024-07-15 
14:06:13.980445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.626 [2024-07-15 14:06:13.980490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.626 [2024-07-15 14:06:13.980505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.626 [2024-07-15 14:06:13.980517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.626 [2024-07-15 14:06:13.980527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.626 [2024-07-15 14:06:13.980586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.626 [2024-07-15 14:06:13.980603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.626 [2024-07-15 14:06:13.980615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.626 [2024-07-15 14:06:13.980626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.626 [2024-07-15 14:06:13.980767] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 445.051 ms, result 0 00:24:51.000 00:24:51.000 00:24:51.000 14:06:15 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:24:51.258 [2024-07-15 14:06:15.615676] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:24:51.258 [2024-07-15 14:06:15.615843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82322 ] 00:24:51.258 [2024-07-15 14:06:15.787353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.516 [2024-07-15 14:06:15.976125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.773 [2024-07-15 14:06:16.286612] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:51.773 [2024-07-15 14:06:16.286691] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:52.033 [2024-07-15 14:06:16.446644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.446726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:52.033 [2024-07-15 14:06:16.446763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:52.033 [2024-07-15 14:06:16.446780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.446872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.446894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:52.033 [2024-07-15 14:06:16.446908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:52.033 [2024-07-15 14:06:16.446924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.446958] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:52.033 [2024-07-15 14:06:16.447950] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:52.033 [2024-07-15 
14:06:16.447996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.448017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:52.033 [2024-07-15 14:06:16.448031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:24:52.033 [2024-07-15 14:06:16.448043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.449325] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:52.033 [2024-07-15 14:06:16.466261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.466373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:52.033 [2024-07-15 14:06:16.466409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.933 ms 00:24:52.033 [2024-07-15 14:06:16.466429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.466566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.466587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:52.033 [2024-07-15 14:06:16.466607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:52.033 [2024-07-15 14:06:16.466620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.471718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.471775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:52.033 [2024-07-15 14:06:16.471795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.966 ms 00:24:52.033 [2024-07-15 14:06:16.471808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.471916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.471939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:52.033 [2024-07-15 14:06:16.471953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:52.033 [2024-07-15 14:06:16.471965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.472041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.472060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:52.033 [2024-07-15 14:06:16.472073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:52.033 [2024-07-15 14:06:16.472085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.472121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:52.033 [2024-07-15 14:06:16.476440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.476480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:52.033 [2024-07-15 14:06:16.476498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.329 ms 00:24:52.033 [2024-07-15 14:06:16.476511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.476562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.476578] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:52.033 [2024-07-15 14:06:16.476592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:52.033 [2024-07-15 14:06:16.476604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.476677] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:52.033 [2024-07-15 14:06:16.476711] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:52.033 [2024-07-15 14:06:16.476757] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:52.033 [2024-07-15 14:06:16.476780] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:52.033 [2024-07-15 14:06:16.476895] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:52.033 [2024-07-15 14:06:16.476913] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:52.033 [2024-07-15 14:06:16.476928] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:52.033 [2024-07-15 14:06:16.476944] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:52.033 [2024-07-15 14:06:16.476959] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:52.033 [2024-07-15 14:06:16.476972] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:52.033 [2024-07-15 14:06:16.476984] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:52.033 [2024-07-15 14:06:16.476996] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:52.033 [2024-07-15 14:06:16.477007] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:52.033 [2024-07-15 14:06:16.477019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.477036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:52.033 [2024-07-15 14:06:16.477050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:24:52.033 [2024-07-15 14:06:16.477062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.477161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.033 [2024-07-15 14:06:16.477177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:52.033 [2024-07-15 14:06:16.477190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:52.033 [2024-07-15 14:06:16.477202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.033 [2024-07-15 14:06:16.477326] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:52.033 [2024-07-15 14:06:16.477347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:52.033 [2024-07-15 14:06:16.477366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.033 [2024-07-15 14:06:16.477379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.033 [2024-07-15 14:06:16.477392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] 
Region l2p 00:24:52.033 [2024-07-15 14:06:16.477403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:52.033 [2024-07-15 14:06:16.477415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:52.033 [2024-07-15 14:06:16.477427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:52.033 [2024-07-15 14:06:16.477438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:52.033 [2024-07-15 14:06:16.477450] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.033 [2024-07-15 14:06:16.477461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:52.033 [2024-07-15 14:06:16.477473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:52.033 [2024-07-15 14:06:16.477484] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.033 [2024-07-15 14:06:16.477495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:52.033 [2024-07-15 14:06:16.477507] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:52.033 [2024-07-15 14:06:16.477518] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.034 [2024-07-15 14:06:16.477529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:52.034 [2024-07-15 14:06:16.477541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:52.034 [2024-07-15 14:06:16.477552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.034 [2024-07-15 14:06:16.477563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:52.034 [2024-07-15 14:06:16.477589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:52.034 [2024-07-15 14:06:16.477601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.034 [2024-07-15 14:06:16.477612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:52.034 [2024-07-15 14:06:16.477623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:52.034 [2024-07-15 14:06:16.477634] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.034 [2024-07-15 14:06:16.477645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:52.034 [2024-07-15 14:06:16.477656] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:52.034 [2024-07-15 14:06:16.477668] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.034 [2024-07-15 14:06:16.477679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:52.034 [2024-07-15 14:06:16.477691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:52.034 [2024-07-15 14:06:16.477702] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.034 [2024-07-15 14:06:16.477713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:52.034 [2024-07-15 14:06:16.477724] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:52.034 [2024-07-15 14:06:16.477735] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:52.034 [2024-07-15 14:06:16.477746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:52.034 [2024-07-15 14:06:16.477757] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:52.034 [2024-07-15 14:06:16.477769] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:52.034 [2024-07-15 14:06:16.477780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:52.034 [2024-07-15 14:06:16.477791] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:52.034 [2024-07-15 14:06:16.477802] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.034 [2024-07-15 14:06:16.477813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:52.034 [2024-07-15 14:06:16.477825] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:52.034 [2024-07-15 14:06:16.477836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.034 [2024-07-15 14:06:16.477847] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:52.034 [2024-07-15 14:06:16.477858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:52.034 [2024-07-15 14:06:16.477871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.034 [2024-07-15 14:06:16.477882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.034 [2024-07-15 14:06:16.477894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:52.034 [2024-07-15 14:06:16.477906] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:52.034 [2024-07-15 14:06:16.477918] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:52.034 [2024-07-15 14:06:16.477929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:52.034 [2024-07-15 14:06:16.477940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:52.034 [2024-07-15 14:06:16.477952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:52.034 [2024-07-15 14:06:16.477964] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:52.034 [2024-07-15 14:06:16.477979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.034 [2024-07-15 14:06:16.477993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:52.034 [2024-07-15 14:06:16.478005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:52.034 [2024-07-15 14:06:16.478017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:52.034 [2024-07-15 14:06:16.478030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:52.034 [2024-07-15 14:06:16.478042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:52.034 [2024-07-15 14:06:16.478060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:52.034 [2024-07-15 14:06:16.478073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:52.034 [2024-07-15 14:06:16.478085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:24:52.034 [2024-07-15 14:06:16.478098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:52.034 [2024-07-15 14:06:16.478110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:52.034 [2024-07-15 14:06:16.478122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:52.034 [2024-07-15 14:06:16.478135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:52.034 [2024-07-15 14:06:16.478147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:52.034 [2024-07-15 14:06:16.478159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:52.034 [2024-07-15 14:06:16.478171] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:52.034 [2024-07-15 14:06:16.478185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.034 [2024-07-15 14:06:16.478199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:52.034 [2024-07-15 14:06:16.478211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:52.034 [2024-07-15 14:06:16.478223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:52.034 [2024-07-15 14:06:16.478235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:52.034 [2024-07-15 14:06:16.478248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.034 [2024-07-15 14:06:16.478266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:52.034 [2024-07-15 14:06:16.478279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:24:52.034 [2024-07-15 14:06:16.478291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.034 [2024-07-15 14:06:16.530728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.034 [2024-07-15 14:06:16.530807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:52.034 [2024-07-15 14:06:16.530834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.359 ms 00:24:52.034 [2024-07-15 14:06:16.530851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.034 [2024-07-15 14:06:16.530994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.034 [2024-07-15 14:06:16.531014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:52.034 [2024-07-15 14:06:16.531030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:52.034 [2024-07-15 14:06:16.531045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.034 [2024-07-15 14:06:16.577926] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:52.034 [2024-07-15 14:06:16.578002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:52.034 [2024-07-15 14:06:16.578027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.772 ms 00:24:52.034 [2024-07-15 14:06:16.578043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.034 [2024-07-15 14:06:16.578126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.034 [2024-07-15 14:06:16.578146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:52.034 [2024-07-15 14:06:16.578163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:52.034 [2024-07-15 14:06:16.578178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.034 [2024-07-15 14:06:16.578671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.034 [2024-07-15 14:06:16.578712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:52.034 [2024-07-15 14:06:16.578745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:24:52.034 [2024-07-15 14:06:16.578774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.579048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.579122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:52.294 [2024-07-15 14:06:16.579156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:24:52.294 [2024-07-15 14:06:16.579184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.599453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.599508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:52.294 [2024-07-15 14:06:16.599531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.212 ms 00:24:52.294 [2024-07-15 14:06:16.599548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.619510] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:52.294 [2024-07-15 14:06:16.619567] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:52.294 [2024-07-15 14:06:16.619592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.619608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:52.294 [2024-07-15 14:06:16.619625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.876 ms 00:24:52.294 [2024-07-15 14:06:16.619640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.656131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.656196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:52.294 [2024-07-15 14:06:16.656221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.434 ms 00:24:52.294 [2024-07-15 14:06:16.656247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.677059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.677132] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:52.294 [2024-07-15 14:06:16.677156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.760 ms 00:24:52.294 [2024-07-15 14:06:16.677173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.697639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.697714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:52.294 [2024-07-15 14:06:16.697739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.390 ms 00:24:52.294 [2024-07-15 14:06:16.697754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.698907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.698958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:52.294 [2024-07-15 14:06:16.698979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:24:52.294 [2024-07-15 14:06:16.698994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.785594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.785675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:52.294 [2024-07-15 14:06:16.785702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.553 ms 00:24:52.294 [2024-07-15 14:06:16.785718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.801554] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:52.294 [2024-07-15 14:06:16.805018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.805080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:52.294 [2024-07-15 14:06:16.805107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.210 ms 00:24:52.294 [2024-07-15 14:06:16.805122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.805262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.805287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:52.294 [2024-07-15 14:06:16.805329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:52.294 [2024-07-15 14:06:16.805349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.805455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.805485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:52.294 [2024-07-15 14:06:16.805500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:52.294 [2024-07-15 14:06:16.805515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.805555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.805574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:52.294 [2024-07-15 14:06:16.805589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:52.294 [2024-07-15 14:06:16.805603] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.294 [2024-07-15 14:06:16.805649] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:52.294 [2024-07-15 14:06:16.805680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.294 [2024-07-15 14:06:16.805714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:52.294 [2024-07-15 14:06:16.805750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:52.294 [2024-07-15 14:06:16.805778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.553 [2024-07-15 14:06:16.843864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.553 [2024-07-15 14:06:16.843939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:52.553 [2024-07-15 14:06:16.843965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.031 ms 00:24:52.553 [2024-07-15 14:06:16.843980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.553 [2024-07-15 14:06:16.844086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.553 [2024-07-15 14:06:16.844121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:52.553 [2024-07-15 14:06:16.844138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:52.553 [2024-07-15 14:06:16.844153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.553 [2024-07-15 14:06:16.845798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 398.493 ms, result 0 00:25:32.886  Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-15 14:06:57.215115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.886 [2024-07-15 14:06:57.215260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:32.886 [2024-07-15 14:06:57.215282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:32.886 [2024-07-15 14:06:57.215295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.886 [2024-07-15 14:06:57.215405] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:32.886 [2024-07-15 14:06:57.218846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.886 [2024-07-15 14:06:57.218890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:32.886 [2024-07-15 14:06:57.218908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.414 ms 00:25:32.886 [2024-07-15 14:06:57.218921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.886 [2024-07-15 14:06:57.219178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.886 [2024-07-15 14:06:57.219206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:32.886 [2024-07-15 14:06:57.219221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:25:32.886 [2024-07-15 14:06:57.219233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.886 [2024-07-15 14:06:57.222805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.886 [2024-07-15 14:06:57.222844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:32.886 [2024-07-15 14:06:57.222861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.549 ms 00:25:32.886 [2024-07-15 14:06:57.222873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.886 [2024-07-15 14:06:57.229682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.886 [2024-07-15 14:06:57.229730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:32.886 [2024-07-15 14:06:57.229756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.782 ms 00:25:32.886 [2024-07-15 14:06:57.229770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.886 [2024-07-15 14:06:57.261930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.886 [2024-07-15 14:06:57.262023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:32.886 [2024-07-15 14:06:57.262048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.068 ms 00:25:32.886 [2024-07-15 14:06:57.262061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.886 [2024-07-15 14:06:57.280285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.886 [2024-07-15 14:06:57.280357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:32.886 [2024-07-15 14:06:57.280380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.180 ms 00:25:32.886 [2024-07-15 14:06:57.280393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.886 [2024-07-15 14:06:57.280547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.886 [2024-07-15 14:06:57.280567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:32.886 [2024-07-15 14:06:57.280582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:25:32.886 [2024-07-15 14:06:57.280600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.886 [2024-07-15 14:06:57.312754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.886 [2024-07-15 14:06:57.312813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:32.886 
[2024-07-15 14:06:57.312834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.130 ms 00:25:32.886 [2024-07-15 14:06:57.312846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.886 [2024-07-15 14:06:57.345003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.886 [2024-07-15 14:06:57.345084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:32.886 [2024-07-15 14:06:57.345115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.121 ms 00:25:32.886 [2024-07-15 14:06:57.345128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.886 [2024-07-15 14:06:57.378158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.887 [2024-07-15 14:06:57.378235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:32.887 [2024-07-15 14:06:57.378277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.992 ms 00:25:32.887 [2024-07-15 14:06:57.378291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.887 [2024-07-15 14:06:57.409736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.887 [2024-07-15 14:06:57.409799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:32.887 [2024-07-15 14:06:57.409821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.322 ms 00:25:32.887 [2024-07-15 14:06:57.409834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.887 [2024-07-15 14:06:57.409869] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:32.887 [2024-07-15 14:06:57.409892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.409907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.409927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.409941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.409955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.409967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.409980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.409993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 
[2024-07-15 14:06:57.410097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 
state: free 00:25:32.887 [2024-07-15 14:06:57.410489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 
0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.410996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.411008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.411020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.411033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.411045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.411058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.411071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.411083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.411096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.411108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.411121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:32.887 [2024-07-15 14:06:57.411133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:32.888 [2024-07-15 14:06:57.411326] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:32.888 [2024-07-15 14:06:57.411346] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48aaaeb7-ff59-47dc-b2d2-8bf1c9ea6f7e 00:25:32.888 [2024-07-15 14:06:57.411359] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:32.888 [2024-07-15 14:06:57.411371] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:32.888 [2024-07-15 14:06:57.411390] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:32.888 [2024-07-15 14:06:57.411403] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:32.888 [2024-07-15 14:06:57.411414] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:32.888 [2024-07-15 14:06:57.411426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:32.888 [2024-07-15 14:06:57.411439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:32.888 [2024-07-15 14:06:57.411450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:32.888 [2024-07-15 14:06:57.411460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:32.888 [2024-07-15 14:06:57.411472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.888 [2024-07-15 14:06:57.411485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:32.888 [2024-07-15 14:06:57.411498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:25:32.888 [2024-07-15 14:06:57.411510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.432805] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:33.146 [2024-07-15 14:06:57.432886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:33.146 [2024-07-15 14:06:57.432933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.232 ms 00:25:33.146 [2024-07-15 14:06:57.432953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.433618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.146 [2024-07-15 14:06:57.433662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:33.146 [2024-07-15 14:06:57.433686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:25:33.146 [2024-07-15 14:06:57.433706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.482755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.482831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:33.146 [2024-07-15 14:06:57.482852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.482879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.482961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.482977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:33.146 [2024-07-15 14:06:57.482990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.483002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.483105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.483125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:33.146 [2024-07-15 14:06:57.483139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.483150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.483173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.483187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:33.146 [2024-07-15 14:06:57.483199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.483211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.582710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.582786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:33.146 [2024-07-15 14:06:57.582807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.582820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.668468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.668544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:33.146 [2024-07-15 14:06:57.668566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.668580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:33.146 [2024-07-15 14:06:57.668663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.668691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:33.146 [2024-07-15 14:06:57.668721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.668735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.668795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.668811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:33.146 [2024-07-15 14:06:57.668824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.668836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.668966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.668990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:33.146 [2024-07-15 14:06:57.669011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.669023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.669075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.669095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:33.146 [2024-07-15 14:06:57.669115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.669134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.669193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.669211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:33.146 [2024-07-15 14:06:57.669224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.669252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.669347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.146 [2024-07-15 14:06:57.669369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:33.146 [2024-07-15 14:06:57.669382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.146 [2024-07-15 14:06:57.669395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.146 [2024-07-15 14:06:57.669564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 454.416 ms, result 0 00:25:34.517 00:25:34.517 00:25:34.517 14:06:58 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:37.045 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:37.045 14:07:01 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:37.045 [2024-07-15 14:07:01.116931] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:25:37.045 [2024-07-15 14:07:01.117086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82770 ] 00:25:37.045 [2024-07-15 14:07:01.277615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.045 [2024-07-15 14:07:01.513437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.304 [2024-07-15 14:07:01.845506] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:37.304 [2024-07-15 14:07:01.845616] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:37.564 [2024-07-15 14:07:02.008033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.564 [2024-07-15 14:07:02.008109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:37.564 [2024-07-15 14:07:02.008131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:37.564 [2024-07-15 14:07:02.008143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.008218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.564 [2024-07-15 14:07:02.008239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:37.564 [2024-07-15 14:07:02.008252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:37.564 [2024-07-15 14:07:02.008267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.008298] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:37.564 [2024-07-15 14:07:02.009237] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:37.564 [2024-07-15 14:07:02.009270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.564 [2024-07-15 14:07:02.009287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:37.564 [2024-07-15 14:07:02.009313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:25:37.564 [2024-07-15 14:07:02.009327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.010550] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:37.564 [2024-07-15 14:07:02.026830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.564 [2024-07-15 14:07:02.026880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:37.564 [2024-07-15 14:07:02.026900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.281 ms 00:25:37.564 [2024-07-15 14:07:02.026912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.026989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.564 [2024-07-15 14:07:02.027008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:37.564 [2024-07-15 14:07:02.027024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:37.564 [2024-07-15 14:07:02.027035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.031683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:37.564 [2024-07-15 14:07:02.031898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:37.564 [2024-07-15 14:07:02.032031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.551 ms 00:25:37.564 [2024-07-15 14:07:02.032082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.032284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.564 [2024-07-15 14:07:02.032342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:37.564 [2024-07-15 14:07:02.032358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:37.564 [2024-07-15 14:07:02.032370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.032445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.564 [2024-07-15 14:07:02.032463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:37.564 [2024-07-15 14:07:02.032476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:37.564 [2024-07-15 14:07:02.032487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.032523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:37.564 [2024-07-15 14:07:02.036801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.564 [2024-07-15 14:07:02.036842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:37.564 [2024-07-15 14:07:02.036858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.288 ms 00:25:37.564 [2024-07-15 14:07:02.036869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.036916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.564 [2024-07-15 14:07:02.036933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:37.564 [2024-07-15 14:07:02.036945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:37.564 [2024-07-15 14:07:02.036956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.037004] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:37.564 [2024-07-15 14:07:02.037034] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:37.564 [2024-07-15 14:07:02.037078] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:37.564 [2024-07-15 14:07:02.037101] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:37.564 [2024-07-15 14:07:02.037207] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:37.564 [2024-07-15 14:07:02.037222] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:37.564 [2024-07-15 14:07:02.037236] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:37.564 [2024-07-15 14:07:02.037251] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:37.564 [2024-07-15 14:07:02.037264] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:37.564 [2024-07-15 14:07:02.037276] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:37.564 [2024-07-15 14:07:02.037287] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:37.564 [2024-07-15 14:07:02.037297] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:37.564 [2024-07-15 14:07:02.037331] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:37.564 [2024-07-15 14:07:02.037344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.564 [2024-07-15 14:07:02.037360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:37.564 [2024-07-15 14:07:02.037373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:25:37.564 [2024-07-15 14:07:02.037383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.037496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.564 [2024-07-15 14:07:02.037511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:37.564 [2024-07-15 14:07:02.037523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:37.564 [2024-07-15 14:07:02.037533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.564 [2024-07-15 14:07:02.037641] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:37.564 [2024-07-15 14:07:02.037657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:37.564 [2024-07-15 14:07:02.037674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.564 [2024-07-15 14:07:02.037686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.564 [2024-07-15 14:07:02.037697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:37.564 [2024-07-15 14:07:02.037707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:37.565 [2024-07-15 14:07:02.037718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:37.565 [2024-07-15 14:07:02.037728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:37.565 [2024-07-15 14:07:02.037745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:37.565 [2024-07-15 14:07:02.037758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.565 [2024-07-15 14:07:02.037768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:37.565 [2024-07-15 14:07:02.037779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:37.565 [2024-07-15 14:07:02.037789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.565 [2024-07-15 14:07:02.037799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:37.565 [2024-07-15 14:07:02.037809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:37.565 [2024-07-15 14:07:02.037821] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.565 [2024-07-15 14:07:02.037831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:37.565 [2024-07-15 14:07:02.037842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:37.565 [2024-07-15 14:07:02.037852] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.565 [2024-07-15 14:07:02.037862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:37.565 [2024-07-15 14:07:02.037886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:37.565 [2024-07-15 14:07:02.037896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.565 [2024-07-15 14:07:02.037907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:37.565 [2024-07-15 14:07:02.037917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:37.565 [2024-07-15 14:07:02.037927] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.565 [2024-07-15 14:07:02.037937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:37.565 [2024-07-15 14:07:02.037947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:37.565 [2024-07-15 14:07:02.037957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.565 [2024-07-15 14:07:02.037967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:37.565 [2024-07-15 14:07:02.037976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:37.565 [2024-07-15 14:07:02.037986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.565 [2024-07-15 14:07:02.037996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:37.565 [2024-07-15 14:07:02.038006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:37.565 [2024-07-15 14:07:02.038016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:37.565 [2024-07-15 14:07:02.038026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:37.565 [2024-07-15 14:07:02.038035] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:37.565 [2024-07-15 14:07:02.038045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:37.565 [2024-07-15 14:07:02.038055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:37.565 [2024-07-15 14:07:02.038065] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:37.565 [2024-07-15 14:07:02.038075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.565 [2024-07-15 14:07:02.038084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:37.565 [2024-07-15 14:07:02.038094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:37.565 [2024-07-15 14:07:02.038104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.565 [2024-07-15 14:07:02.038113] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:37.565 [2024-07-15 14:07:02.038124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:37.565 [2024-07-15 14:07:02.038134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.565 [2024-07-15 14:07:02.038145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.565 [2024-07-15 14:07:02.038157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:37.565 [2024-07-15 14:07:02.038168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:37.565 [2024-07-15 14:07:02.038178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:37.565 
[2024-07-15 14:07:02.038189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:37.565 [2024-07-15 14:07:02.038199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:37.565 [2024-07-15 14:07:02.038209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:37.565 [2024-07-15 14:07:02.038220] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:37.565 [2024-07-15 14:07:02.038234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.565 [2024-07-15 14:07:02.038246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:37.565 [2024-07-15 14:07:02.038257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:37.565 [2024-07-15 14:07:02.038268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:37.565 [2024-07-15 14:07:02.038279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:37.565 [2024-07-15 14:07:02.038290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:37.565 [2024-07-15 14:07:02.038315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:37.565 [2024-07-15 14:07:02.038329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:37.565 [2024-07-15 14:07:02.038347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:37.565 [2024-07-15 14:07:02.038359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:37.565 [2024-07-15 14:07:02.038370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:37.565 [2024-07-15 14:07:02.038381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:37.565 [2024-07-15 14:07:02.038392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:37.565 [2024-07-15 14:07:02.038403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:37.565 [2024-07-15 14:07:02.038414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:37.565 [2024-07-15 14:07:02.038425] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:37.565 [2024-07-15 14:07:02.038448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.565 [2024-07-15 14:07:02.038461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:37.565 [2024-07-15 14:07:02.038472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:37.565 [2024-07-15 14:07:02.038483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:37.565 [2024-07-15 14:07:02.038494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:37.565 [2024-07-15 14:07:02.038507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.565 [2024-07-15 14:07:02.038524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:37.565 [2024-07-15 14:07:02.038536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:25:37.565 [2024-07-15 14:07:02.038547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.565 [2024-07-15 14:07:02.082623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.565 [2024-07-15 14:07:02.082703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:37.565 [2024-07-15 14:07:02.082726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.998 ms 00:25:37.565 [2024-07-15 14:07:02.082738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.565 [2024-07-15 14:07:02.082864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.565 [2024-07-15 14:07:02.082881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:37.565 [2024-07-15 14:07:02.082894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:37.565 [2024-07-15 14:07:02.082905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.123465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.123544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:37.823 [2024-07-15 14:07:02.123566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.438 ms 00:25:37.823 [2024-07-15 14:07:02.123578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.123662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.123679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:37.823 [2024-07-15 14:07:02.123694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:37.823 [2024-07-15 14:07:02.123705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.124115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.124135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:37.823 [2024-07-15 14:07:02.124148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:25:37.823 [2024-07-15 14:07:02.124159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.124359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.124382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:37.823 [2024-07-15 14:07:02.124395] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:25:37.823 [2024-07-15 14:07:02.124406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.140880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.140962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:37.823 [2024-07-15 14:07:02.140983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.442 ms 00:25:37.823 [2024-07-15 14:07:02.140995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.158100] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:37.823 [2024-07-15 14:07:02.158188] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:37.823 [2024-07-15 14:07:02.158213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.158226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:37.823 [2024-07-15 14:07:02.158243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.023 ms 00:25:37.823 [2024-07-15 14:07:02.158255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.189247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.189358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:37.823 [2024-07-15 14:07:02.189381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.878 ms 00:25:37.823 [2024-07-15 14:07:02.189405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.205593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.205649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:37.823 [2024-07-15 14:07:02.205668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.100 ms 00:25:37.823 [2024-07-15 14:07:02.205680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.225515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.225608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:37.823 [2024-07-15 14:07:02.225630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.771 ms 00:25:37.823 [2024-07-15 14:07:02.225642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.226620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.226669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:37.823 [2024-07-15 14:07:02.226687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:25:37.823 [2024-07-15 14:07:02.226698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.302853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.302934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:37.823 [2024-07-15 14:07:02.302958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 76.125 ms 00:25:37.823 [2024-07-15 14:07:02.302970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.315879] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:37.823 [2024-07-15 14:07:02.318689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.318730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:37.823 [2024-07-15 14:07:02.318750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.638 ms 00:25:37.823 [2024-07-15 14:07:02.318761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.318880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.318901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:37.823 [2024-07-15 14:07:02.318916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:37.823 [2024-07-15 14:07:02.318927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.319019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.319044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:37.823 [2024-07-15 14:07:02.319057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:37.823 [2024-07-15 14:07:02.319069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.319104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.319129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:37.823 [2024-07-15 14:07:02.319145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:37.823 [2024-07-15 14:07:02.319158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.319202] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:37.823 [2024-07-15 14:07:02.319219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.319231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:37.823 [2024-07-15 14:07:02.319247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:37.823 [2024-07-15 14:07:02.319259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.350575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.350636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:37.823 [2024-07-15 14:07:02.350656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.286 ms 00:25:37.823 [2024-07-15 14:07:02.350667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.823 [2024-07-15 14:07:02.350760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.823 [2024-07-15 14:07:02.350794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:37.823 [2024-07-15 14:07:02.350807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:37.823 [2024-07-15 14:07:02.350819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:37.823 [2024-07-15 14:07:02.351967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 343.432 ms, result 0 00:26:14.448  Copying: 28/1024 [MB] (28 MBps) Copying: 57/1024 [MB] (29 MBps) Copying: 84/1024 [MB] (27 MBps) Copying: 113/1024 [MB] (28 MBps) Copying: 144/1024 [MB] (30 MBps) Copying: 174/1024 [MB] (30 MBps) Copying: 203/1024 [MB] (28 MBps) Copying: 233/1024 [MB] (29 MBps) Copying: 262/1024 [MB] (29 MBps) Copying: 291/1024 [MB] (29 MBps) Copying: 319/1024 [MB] (27 MBps) Copying: 348/1024 [MB] (29 MBps) Copying: 377/1024 [MB] (29 MBps) Copying: 405/1024 [MB] (28 MBps) Copying: 436/1024 [MB] (30 MBps) Copying: 465/1024 [MB] (29 MBps) Copying: 495/1024 [MB] (29 MBps) Copying: 524/1024 [MB] (28 MBps) Copying: 553/1024 [MB] (29 MBps) Copying: 581/1024 [MB] (27 MBps) Copying: 609/1024 [MB] (28 MBps) Copying: 638/1024 [MB] (29 MBps) Copying: 668/1024 [MB] (29 MBps) Copying: 698/1024 [MB] (30 MBps) Copying: 728/1024 [MB] (29 MBps) Copying: 756/1024 [MB] (28 MBps) Copying: 787/1024 [MB] (30 MBps) Copying: 817/1024 [MB] (29 MBps) Copying: 846/1024 [MB] (29 MBps) Copying: 874/1024 [MB] (27 MBps) Copying: 904/1024 [MB] (30 MBps) Copying: 934/1024 [MB] (29 MBps) Copying: 963/1024 [MB] (29 MBps) Copying: 988/1024 [MB] (25 MBps) Copying: 1017/1024 [MB] (28 MBps) Copying: 1048152/1048576 [kB] (6428 kBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-15 14:07:38.944437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.448 [2024-07-15 14:07:38.944531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:14.448 [2024-07-15 14:07:38.944554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:14.448 [2024-07-15 14:07:38.944566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.448 [2024-07-15 14:07:38.946347] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:14.448 [2024-07-15 14:07:38.954754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.448 [2024-07-15 14:07:38.954801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:14.448 [2024-07-15 14:07:38.954820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.343 ms 00:26:14.449 [2024-07-15 14:07:38.954831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.449 [2024-07-15 14:07:38.965521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.449 [2024-07-15 14:07:38.965593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:14.449 [2024-07-15 14:07:38.965614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.353 ms 00:26:14.449 [2024-07-15 14:07:38.965626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.449 [2024-07-15 14:07:38.986298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.449 [2024-07-15 14:07:38.986385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:14.449 [2024-07-15 14:07:38.986420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.643 ms 00:26:14.449 [2024-07-15 14:07:38.986432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.449 [2024-07-15 14:07:38.993185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.449 [2024-07-15 14:07:38.993228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:26:14.449 [2024-07-15 14:07:38.993244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.698 ms 00:26:14.449 [2024-07-15 14:07:38.993257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.707 [2024-07-15 14:07:39.024784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.707 [2024-07-15 14:07:39.024855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:14.707 [2024-07-15 14:07:39.024875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.422 ms 00:26:14.707 [2024-07-15 14:07:39.024886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.707 [2024-07-15 14:07:39.042987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.707 [2024-07-15 14:07:39.043048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:14.707 [2024-07-15 14:07:39.043069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.038 ms 00:26:14.707 [2024-07-15 14:07:39.043094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.707 [2024-07-15 14:07:39.124412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.707 [2024-07-15 14:07:39.124534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:14.707 [2024-07-15 14:07:39.124559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.254 ms 00:26:14.707 [2024-07-15 14:07:39.124572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.707 [2024-07-15 14:07:39.156852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.707 [2024-07-15 14:07:39.156914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:14.707 [2024-07-15 14:07:39.156933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.255 ms 00:26:14.707 [2024-07-15 14:07:39.156945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.707 [2024-07-15 14:07:39.188669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.707 [2024-07-15 14:07:39.188729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:14.707 [2024-07-15 14:07:39.188748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.670 ms 00:26:14.707 [2024-07-15 14:07:39.188759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.707 [2024-07-15 14:07:39.219933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.707 [2024-07-15 14:07:39.219989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:14.707 [2024-07-15 14:07:39.220024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.125 ms 00:26:14.707 [2024-07-15 14:07:39.220035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.707 [2024-07-15 14:07:39.251555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.707 [2024-07-15 14:07:39.251617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:14.707 [2024-07-15 14:07:39.251635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.405 ms 00:26:14.707 [2024-07-15 14:07:39.251647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.707 [2024-07-15 14:07:39.251706] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:26:14.707 [2024-07-15 14:07:39.251730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125184 / 261120 wr_cnt: 1 state: open 00:26:14.707 [2024-07-15 14:07:39.251744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:14.707 [2024-07-15 14:07:39.251935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.251947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.251959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.251971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.251983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.251994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252683] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252973] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:14.708 [2024-07-15 14:07:39.252994] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:14.708 [2024-07-15 14:07:39.253006] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48aaaeb7-ff59-47dc-b2d2-8bf1c9ea6f7e 00:26:14.708 [2024-07-15 14:07:39.253018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125184 00:26:14.708 [2024-07-15 14:07:39.253028] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126144 00:26:14.708 [2024-07-15 14:07:39.253039] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125184 00:26:14.708 [2024-07-15 14:07:39.253052] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0077 00:26:14.708 [2024-07-15 14:07:39.253062] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:14.708 [2024-07-15 14:07:39.253081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:14.708 [2024-07-15 14:07:39.253091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:14.708 [2024-07-15 14:07:39.253101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:14.708 [2024-07-15 14:07:39.253120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:14.708 [2024-07-15 14:07:39.253132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.708 [2024-07-15 14:07:39.253147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:14.708 [2024-07-15 14:07:39.253158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:26:14.708 [2024-07-15 14:07:39.253169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.019 [2024-07-15 14:07:39.270166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.019 [2024-07-15 14:07:39.270215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:15.019 [2024-07-15 14:07:39.270252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.948 ms 00:26:15.019 [2024-07-15 14:07:39.270265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.019 [2024-07-15 14:07:39.270751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.019 [2024-07-15 14:07:39.270786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:15.019 [2024-07-15 14:07:39.270802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:26:15.019 [2024-07-15 14:07:39.270814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.019 [2024-07-15 14:07:39.308750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.019 [2024-07-15 14:07:39.308826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:15.019 [2024-07-15 14:07:39.308845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.019 [2024-07-15 14:07:39.308857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.019 [2024-07-15 14:07:39.308942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.019 [2024-07-15 14:07:39.308959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:15.019 [2024-07-15 14:07:39.308971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.019 [2024-07-15 
14:07:39.308982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.019 [2024-07-15 14:07:39.309091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.019 [2024-07-15 14:07:39.309111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:15.019 [2024-07-15 14:07:39.309126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.019 [2024-07-15 14:07:39.309145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.019 [2024-07-15 14:07:39.309175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.019 [2024-07-15 14:07:39.309191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:15.019 [2024-07-15 14:07:39.309203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.019 [2024-07-15 14:07:39.309220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.019 [2024-07-15 14:07:39.409788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.019 [2024-07-15 14:07:39.409852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:15.019 [2024-07-15 14:07:39.409871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.019 [2024-07-15 14:07:39.409883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.019 [2024-07-15 14:07:39.495472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.019 [2024-07-15 14:07:39.495544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:15.019 [2024-07-15 14:07:39.495564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.019 [2024-07-15 14:07:39.495576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.019 [2024-07-15 14:07:39.495666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.019 [2024-07-15 14:07:39.495684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:15.019 [2024-07-15 14:07:39.495696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.019 [2024-07-15 14:07:39.495707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.019 [2024-07-15 14:07:39.495749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.019 [2024-07-15 14:07:39.495774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:15.019 [2024-07-15 14:07:39.495786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.020 [2024-07-15 14:07:39.495797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.020 [2024-07-15 14:07:39.495919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.020 [2024-07-15 14:07:39.495942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:15.020 [2024-07-15 14:07:39.495956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.020 [2024-07-15 14:07:39.495969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.020 [2024-07-15 14:07:39.496024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.020 [2024-07-15 14:07:39.496048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:15.020 [2024-07-15 14:07:39.496077] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.020 [2024-07-15 14:07:39.496089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.020 [2024-07-15 14:07:39.496135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.020 [2024-07-15 14:07:39.496151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:15.020 [2024-07-15 14:07:39.496162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.020 [2024-07-15 14:07:39.496173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.020 [2024-07-15 14:07:39.496223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.020 [2024-07-15 14:07:39.496247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:15.020 [2024-07-15 14:07:39.496259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.020 [2024-07-15 14:07:39.496270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.020 [2024-07-15 14:07:39.496453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 553.717 ms, result 0 00:26:16.918 00:26:16.918 00:26:16.918 14:07:41 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:16.918 [2024-07-15 14:07:41.197510] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:16.918 [2024-07-15 14:07:41.198353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83168 ] 00:26:16.918 [2024-07-15 14:07:41.371334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.176 [2024-07-15 14:07:41.597516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.436 [2024-07-15 14:07:41.938833] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:17.436 [2024-07-15 14:07:41.938916] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:17.712 [2024-07-15 14:07:42.098403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.098485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:17.712 [2024-07-15 14:07:42.098508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:17.712 [2024-07-15 14:07:42.098521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 14:07:42.098596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.098618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.712 [2024-07-15 14:07:42.098631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:17.712 [2024-07-15 14:07:42.098646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 14:07:42.098677] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:17.712 [2024-07-15 14:07:42.099614] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:17.712 [2024-07-15 14:07:42.099652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.099671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.712 [2024-07-15 14:07:42.099684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:26:17.712 [2024-07-15 14:07:42.099695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 14:07:42.100787] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:17.712 [2024-07-15 14:07:42.116939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.116987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:17.712 [2024-07-15 14:07:42.117008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.153 ms 00:26:17.712 [2024-07-15 14:07:42.117020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 14:07:42.117107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.117137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:17.712 [2024-07-15 14:07:42.117163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:17.712 [2024-07-15 14:07:42.117182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 14:07:42.121729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.121793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.712 [2024-07-15 14:07:42.121813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.440 ms 00:26:17.712 [2024-07-15 14:07:42.121825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 14:07:42.121935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.121958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.712 [2024-07-15 14:07:42.121971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:17.712 [2024-07-15 14:07:42.121983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 14:07:42.122055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.122074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:17.712 [2024-07-15 14:07:42.122086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:17.712 [2024-07-15 14:07:42.122102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 14:07:42.122137] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:17.712 [2024-07-15 14:07:42.126483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.126524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.712 [2024-07-15 14:07:42.126542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.355 ms 00:26:17.712 [2024-07-15 14:07:42.126553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 
14:07:42.126600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.126616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:17.712 [2024-07-15 14:07:42.126628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:17.712 [2024-07-15 14:07:42.126639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 14:07:42.126685] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:17.712 [2024-07-15 14:07:42.126716] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:17.712 [2024-07-15 14:07:42.126760] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:17.712 [2024-07-15 14:07:42.126782] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:17.712 [2024-07-15 14:07:42.126889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:17.712 [2024-07-15 14:07:42.126904] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:17.712 [2024-07-15 14:07:42.126918] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:17.712 [2024-07-15 14:07:42.126932] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:17.712 [2024-07-15 14:07:42.126945] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:17.712 [2024-07-15 14:07:42.126957] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:17.712 [2024-07-15 14:07:42.126968] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:17.712 [2024-07-15 14:07:42.126978] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:17.712 [2024-07-15 14:07:42.126989] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:17.712 [2024-07-15 14:07:42.127001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.127016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:17.712 [2024-07-15 14:07:42.127028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:26:17.712 [2024-07-15 14:07:42.127039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 14:07:42.127128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.712 [2024-07-15 14:07:42.127142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:17.712 [2024-07-15 14:07:42.127155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:17.712 [2024-07-15 14:07:42.127166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.712 [2024-07-15 14:07:42.127329] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:17.712 [2024-07-15 14:07:42.127351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:17.712 [2024-07-15 14:07:42.127377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.712 [2024-07-15 14:07:42.127389] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:17.712 [2024-07-15 14:07:42.127411] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127422] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:17.712 [2024-07-15 14:07:42.127433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:17.712 [2024-07-15 14:07:42.127444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.712 [2024-07-15 14:07:42.127464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:17.712 [2024-07-15 14:07:42.127474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:17.712 [2024-07-15 14:07:42.127484] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.712 [2024-07-15 14:07:42.127494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:17.712 [2024-07-15 14:07:42.127504] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:17.712 [2024-07-15 14:07:42.127514] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:17.712 [2024-07-15 14:07:42.127534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:17.712 [2024-07-15 14:07:42.127545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:17.712 [2024-07-15 14:07:42.127578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.712 [2024-07-15 14:07:42.127599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:17.712 [2024-07-15 14:07:42.127609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.712 [2024-07-15 14:07:42.127629] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:17.712 [2024-07-15 14:07:42.127639] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.712 [2024-07-15 14:07:42.127659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:17.712 [2024-07-15 14:07:42.127669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.712 [2024-07-15 14:07:42.127689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:17.712 [2024-07-15 14:07:42.127699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.712 [2024-07-15 14:07:42.127719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:17.712 [2024-07-15 14:07:42.127729] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:17.712 [2024-07-15 14:07:42.127739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.712 [2024-07-15 14:07:42.127749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:17.712 [2024-07-15 14:07:42.127759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:17.712 [2024-07-15 14:07:42.127769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:17.712 [2024-07-15 14:07:42.127789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:17.712 [2024-07-15 14:07:42.127799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127809] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:17.712 [2024-07-15 14:07:42.127820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:17.712 [2024-07-15 14:07:42.127831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.712 [2024-07-15 14:07:42.127841] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.712 [2024-07-15 14:07:42.127852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:17.712 [2024-07-15 14:07:42.127863] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:17.712 [2024-07-15 14:07:42.127873] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:17.713 [2024-07-15 14:07:42.127885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:17.713 [2024-07-15 14:07:42.127895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:17.713 [2024-07-15 14:07:42.127905] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:17.713 [2024-07-15 14:07:42.127917] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:17.713 [2024-07-15 14:07:42.127931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.713 [2024-07-15 14:07:42.127943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:17.713 [2024-07-15 14:07:42.127955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:17.713 [2024-07-15 14:07:42.127966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:17.713 [2024-07-15 14:07:42.127977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:17.713 [2024-07-15 14:07:42.127987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:17.713 [2024-07-15 14:07:42.127998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:17.713 [2024-07-15 14:07:42.128009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:17.713 [2024-07-15 
14:07:42.128020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:17.713 [2024-07-15 14:07:42.128031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:17.713 [2024-07-15 14:07:42.128042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:17.713 [2024-07-15 14:07:42.128054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:17.713 [2024-07-15 14:07:42.128065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:17.713 [2024-07-15 14:07:42.128076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:17.713 [2024-07-15 14:07:42.128088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:17.713 [2024-07-15 14:07:42.128099] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:17.713 [2024-07-15 14:07:42.128111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.713 [2024-07-15 14:07:42.128123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:17.713 [2024-07-15 14:07:42.128135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:17.713 [2024-07-15 14:07:42.128146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:17.713 [2024-07-15 14:07:42.128157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:17.713 [2024-07-15 14:07:42.128169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.713 [2024-07-15 14:07:42.128185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:17.713 [2024-07-15 14:07:42.128197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:26:17.713 [2024-07-15 14:07:42.128208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.713 [2024-07-15 14:07:42.168883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.713 [2024-07-15 14:07:42.168954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:17.713 [2024-07-15 14:07:42.168980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.611 ms 00:26:17.713 [2024-07-15 14:07:42.169001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.713 [2024-07-15 14:07:42.169123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.713 [2024-07-15 14:07:42.169140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:17.713 [2024-07-15 14:07:42.169152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:17.713 [2024-07-15 14:07:42.169164] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.713 [2024-07-15 14:07:42.208042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.713 [2024-07-15 14:07:42.208108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.713 [2024-07-15 14:07:42.208128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.786 ms 00:26:17.713 [2024-07-15 14:07:42.208141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.713 [2024-07-15 14:07:42.208211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.713 [2024-07-15 14:07:42.208229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:17.713 [2024-07-15 14:07:42.208241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:17.713 [2024-07-15 14:07:42.208252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.713 [2024-07-15 14:07:42.208661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.713 [2024-07-15 14:07:42.208681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:17.713 [2024-07-15 14:07:42.208694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:26:17.713 [2024-07-15 14:07:42.208705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.713 [2024-07-15 14:07:42.208872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.713 [2024-07-15 14:07:42.208897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:17.713 [2024-07-15 14:07:42.208910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:26:17.713 [2024-07-15 14:07:42.208921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.713 [2024-07-15 14:07:42.225141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.713 [2024-07-15 14:07:42.225192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:17.713 [2024-07-15 14:07:42.225211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.192 ms 00:26:17.713 [2024-07-15 14:07:42.225223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.713 [2024-07-15 14:07:42.241729] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:17.713 [2024-07-15 14:07:42.241780] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:17.713 [2024-07-15 14:07:42.241800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.713 [2024-07-15 14:07:42.241813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:17.713 [2024-07-15 14:07:42.241826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.404 ms 00:26:17.713 [2024-07-15 14:07:42.241837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.970 [2024-07-15 14:07:42.272730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.970 [2024-07-15 14:07:42.272815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:17.970 [2024-07-15 14:07:42.272837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.837 ms 00:26:17.970 [2024-07-15 14:07:42.272862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.970 [2024-07-15 
14:07:42.289038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.970 [2024-07-15 14:07:42.289093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:17.970 [2024-07-15 14:07:42.289112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.079 ms 00:26:17.970 [2024-07-15 14:07:42.289124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.970 [2024-07-15 14:07:42.304552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.970 [2024-07-15 14:07:42.304607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:17.970 [2024-07-15 14:07:42.304626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.377 ms 00:26:17.970 [2024-07-15 14:07:42.304637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.970 [2024-07-15 14:07:42.305473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.970 [2024-07-15 14:07:42.305512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:17.970 [2024-07-15 14:07:42.305528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:26:17.970 [2024-07-15 14:07:42.305540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.970 [2024-07-15 14:07:42.378794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.971 [2024-07-15 14:07:42.378869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:17.971 [2024-07-15 14:07:42.378891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.228 ms 00:26:17.971 [2024-07-15 14:07:42.378903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.971 [2024-07-15 14:07:42.391617] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:17.971 [2024-07-15 14:07:42.394318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.971 [2024-07-15 14:07:42.394355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:17.971 [2024-07-15 14:07:42.394375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.330 ms 00:26:17.971 [2024-07-15 14:07:42.394386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.971 [2024-07-15 14:07:42.394515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.971 [2024-07-15 14:07:42.394536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:17.971 [2024-07-15 14:07:42.394550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:17.971 [2024-07-15 14:07:42.394561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.971 [2024-07-15 14:07:42.396149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.971 [2024-07-15 14:07:42.396192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:17.971 [2024-07-15 14:07:42.396209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.533 ms 00:26:17.971 [2024-07-15 14:07:42.396220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.971 [2024-07-15 14:07:42.396258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.971 [2024-07-15 14:07:42.396274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:17.971 [2024-07-15 14:07:42.396286] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:17.971 [2024-07-15 14:07:42.396296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.971 [2024-07-15 14:07:42.396362] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:17.971 [2024-07-15 14:07:42.396380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.971 [2024-07-15 14:07:42.396391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:17.971 [2024-07-15 14:07:42.396408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:17.971 [2024-07-15 14:07:42.396418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.971 [2024-07-15 14:07:42.427470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.971 [2024-07-15 14:07:42.427535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:17.971 [2024-07-15 14:07:42.427556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.021 ms 00:26:17.971 [2024-07-15 14:07:42.427569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.971 [2024-07-15 14:07:42.427670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.971 [2024-07-15 14:07:42.427700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:17.971 [2024-07-15 14:07:42.427714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:17.971 [2024-07-15 14:07:42.427725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.971 [2024-07-15 14:07:42.435219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 334.744 ms, result 0 00:26:55.731  Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-15 14:08:20.080886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.731 [2024-07-15 14:08:20.080977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:55.731 [2024-07-15 14:08:20.081001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:55.731 [2024-07-15 14:08:20.081014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.731 [2024-07-15 
14:08:20.081046] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:55.731 [2024-07-15 14:08:20.085983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.731 [2024-07-15 14:08:20.086044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:55.731 [2024-07-15 14:08:20.086064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.910 ms 00:26:55.731 [2024-07-15 14:08:20.086076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.731 [2024-07-15 14:08:20.086373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.731 [2024-07-15 14:08:20.086404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:55.731 [2024-07-15 14:08:20.086428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:26:55.731 [2024-07-15 14:08:20.086447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.731 [2024-07-15 14:08:20.090806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.731 [2024-07-15 14:08:20.090869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:55.731 [2024-07-15 14:08:20.090900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.323 ms 00:26:55.731 [2024-07-15 14:08:20.090912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.731 [2024-07-15 14:08:20.098851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.731 [2024-07-15 14:08:20.098944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:55.731 [2024-07-15 14:08:20.098965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.875 ms 00:26:55.731 [2024-07-15 14:08:20.098977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.731 [2024-07-15 14:08:20.133792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.731 [2024-07-15 14:08:20.133871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:55.731 [2024-07-15 14:08:20.133893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.723 ms 00:26:55.731 [2024-07-15 14:08:20.133906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.731 [2024-07-15 14:08:20.152039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.731 [2024-07-15 14:08:20.152138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:55.731 [2024-07-15 14:08:20.152162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.042 ms 00:26:55.731 [2024-07-15 14:08:20.152189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.731 [2024-07-15 14:08:20.236418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.731 [2024-07-15 14:08:20.236534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:55.731 [2024-07-15 14:08:20.236559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.158 ms 00:26:55.731 [2024-07-15 14:08:20.236572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.731 [2024-07-15 14:08:20.270014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.731 [2024-07-15 14:08:20.270096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:55.731 [2024-07-15 14:08:20.270120] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.413 ms 00:26:55.731 [2024-07-15 14:08:20.270132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.990 [2024-07-15 14:08:20.302591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.990 [2024-07-15 14:08:20.302668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:55.990 [2024-07-15 14:08:20.302690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.366 ms 00:26:55.990 [2024-07-15 14:08:20.302702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.990 [2024-07-15 14:08:20.335136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.990 [2024-07-15 14:08:20.335215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:55.990 [2024-07-15 14:08:20.335238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.354 ms 00:26:55.990 [2024-07-15 14:08:20.335272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.990 [2024-07-15 14:08:20.367003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.990 [2024-07-15 14:08:20.367087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:55.990 [2024-07-15 14:08:20.367109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.558 ms 00:26:55.990 [2024-07-15 14:08:20.367121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.990 [2024-07-15 14:08:20.367201] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:55.990 [2024-07-15 14:08:20.367227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:26:55.990 [2024-07-15 14:08:20.367243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367413] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:55.990 [2024-07-15 14:08:20.367541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 
[2024-07-15 14:08:20.367717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.367994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 
state: free 00:26:55.991 [2024-07-15 14:08:20.368005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 
0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:55.991 [2024-07-15 14:08:20.368441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:55.992 [2024-07-15 14:08:20.368463] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:55.992 [2024-07-15 14:08:20.368474] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48aaaeb7-ff59-47dc-b2d2-8bf1c9ea6f7e 00:26:55.992 [2024-07-15 14:08:20.368486] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:26:55.992 [2024-07-15 14:08:20.368497] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 9408 00:26:55.992 [2024-07-15 14:08:20.368508] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 8448 00:26:55.992 [2024-07-15 14:08:20.368520] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1136 00:26:55.992 [2024-07-15 14:08:20.368530] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:55.992 [2024-07-15 14:08:20.368550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:55.992 [2024-07-15 14:08:20.368561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:55.992 [2024-07-15 14:08:20.368571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:55.992 [2024-07-15 14:08:20.368580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:55.992 [2024-07-15 14:08:20.368591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.992 [2024-07-15 14:08:20.368604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:55.992 [2024-07-15 14:08:20.368620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:26:55.992 [2024-07-15 14:08:20.368630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.992 [2024-07-15 14:08:20.385402] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:55.992 [2024-07-15 14:08:20.385476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:55.992 [2024-07-15 14:08:20.385496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.708 ms 00:26:55.992 [2024-07-15 14:08:20.385529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.992 [2024-07-15 14:08:20.385990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.992 [2024-07-15 14:08:20.386013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:55.992 [2024-07-15 14:08:20.386027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:26:55.992 [2024-07-15 14:08:20.386038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.992 [2024-07-15 14:08:20.423145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.992 [2024-07-15 14:08:20.423221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:55.992 [2024-07-15 14:08:20.423242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.992 [2024-07-15 14:08:20.423254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.992 [2024-07-15 14:08:20.423362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.992 [2024-07-15 14:08:20.423381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:55.992 [2024-07-15 14:08:20.423394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.992 [2024-07-15 14:08:20.423405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.992 [2024-07-15 14:08:20.423503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.992 [2024-07-15 14:08:20.423523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:55.992 [2024-07-15 14:08:20.423535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.992 [2024-07-15 14:08:20.423546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.992 [2024-07-15 14:08:20.423574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.992 [2024-07-15 14:08:20.423589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:55.992 [2024-07-15 14:08:20.423600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.992 [2024-07-15 14:08:20.423611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.992 [2024-07-15 14:08:20.524105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.992 [2024-07-15 14:08:20.524176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:55.992 [2024-07-15 14:08:20.524198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.992 [2024-07-15 14:08:20.524210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.251 [2024-07-15 14:08:20.610372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.251 [2024-07-15 14:08:20.610450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:56.251 [2024-07-15 14:08:20.610472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.251 [2024-07-15 14:08:20.610484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
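The two ftl_dev_dump_stats dumps above report WAF as the ratio of total writes to user writes: 126144 / 125184 = 1.0077 for the initial fill, and 9408 / 8448 = 1.1136 for the restore pass. A minimal shell sketch that recomputes the figure from the counters in the second dump (values copied from the log above; the snippet is illustrative only and is not part of the test scripts):

```sh
# Recompute the write amplification factor (WAF) printed by ftl_debug.c's
# ftl_dev_dump_stats. Counters are copied from the dump above;
# WAF = total_writes / user_writes.
total_writes=9408   # all media writes (user I/O plus internal writes, e.g. metadata)
user_writes=8448    # writes issued by the user I/O path
awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'
# -> WAF: 1.1136, matching the logged value
```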
00:26:56.251 [2024-07-15 14:08:20.610577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.251 [2024-07-15 14:08:20.610596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:56.251 [2024-07-15 14:08:20.610608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.251 [2024-07-15 14:08:20.610620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.251 [2024-07-15 14:08:20.610663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.251 [2024-07-15 14:08:20.610677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:56.251 [2024-07-15 14:08:20.610701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.251 [2024-07-15 14:08:20.610712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.251 [2024-07-15 14:08:20.610842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.251 [2024-07-15 14:08:20.610863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:56.251 [2024-07-15 14:08:20.610876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.251 [2024-07-15 14:08:20.610886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.251 [2024-07-15 14:08:20.610936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.251 [2024-07-15 14:08:20.610954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:56.251 [2024-07-15 14:08:20.610971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.251 [2024-07-15 14:08:20.610982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.251 [2024-07-15 14:08:20.611026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.251 [2024-07-15 14:08:20.611040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:56.251 [2024-07-15 14:08:20.611052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.251 [2024-07-15 14:08:20.611062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.251 [2024-07-15 14:08:20.611114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.251 [2024-07-15 14:08:20.611131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:56.251 [2024-07-15 14:08:20.611148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.251 [2024-07-15 14:08:20.611159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.251 [2024-07-15 14:08:20.611296] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 530.383 ms, result 0 00:26:57.186 00:26:57.186 00:26:57.186 14:08:21 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:59.730 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:59.730 14:08:23 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:59.730 14:08:23 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:59.730 14:08:23 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:59.730 14:08:24 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:59.730 14:08:24 ftl.ftl_restore -- 
ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:59.730 Process with pid 81715 is not found 00:26:59.730 Remove shared memory files 00:26:59.730 14:08:24 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 81715 00:26:59.730 14:08:24 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81715 ']' 00:26:59.730 14:08:24 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81715 00:26:59.730 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81715) - No such process 00:26:59.730 14:08:24 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 81715 is not found' 00:26:59.730 14:08:24 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:59.730 14:08:24 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:59.730 14:08:24 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:59.730 14:08:24 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:59.730 14:08:24 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:59.730 14:08:24 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:59.730 14:08:24 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:59.730 ************************************ 00:26:59.730 END TEST ftl_restore 00:26:59.730 ************************************ 00:26:59.730 00:26:59.730 real 3m6.614s 00:26:59.730 user 2m51.775s 00:26:59.730 sys 0m16.668s 00:26:59.730 14:08:24 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:59.730 14:08:24 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:59.730 14:08:24 ftl -- common/autotest_common.sh@1142 -- # return 0 00:26:59.730 14:08:24 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:59.730 14:08:24 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:59.730 14:08:24 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.730 14:08:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:59.730 ************************************ 00:26:59.730 START TEST ftl_dirty_shutdown 00:26:59.730 ************************************ 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:59.730 * Looking for test storage... 00:26:59.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
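
Note how the ftl/common.sh prologue traced above locates the repository from the script's own path rather than from $PWD, so the test runs the same from any working directory. The idiom, sketched on its own:

# Path resolution as traced above (a sketch of the common.sh prologue):
testdir=$(readlink -f "$(dirname "$0")")   # -> /home/vagrant/spdk_repo/spdk/test/ftl
rootdir=$(readlink -f "$testdir/../..")    # -> /home/vagrant/spdk_repo/spdk
rpc_py=$rootdir/scripts/rpc.py             # RPC client used by every bdev_* call below
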
00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # 
device=0000:00:11.0 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=83640 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 83640 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 83640 ']' 00:26:59.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:59.730 14:08:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:59.988 [2024-07-15 14:08:24.407625] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:59.988 [2024-07-15 14:08:24.407935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83640 ] 00:27:00.246 [2024-07-15 14:08:24.579937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.246 [2024-07-15 14:08:24.767424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.242 14:08:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:01.242 14:08:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:27:01.242 14:08:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:01.242 14:08:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:01.242 14:08:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:01.242 14:08:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:01.242 14:08:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:01.242 14:08:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:01.516 14:08:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:01.516 14:08:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:01.516 14:08:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:01.516 14:08:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:01.516 14:08:25 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:27:01.516 14:08:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:01.516 14:08:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:01.516 14:08:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:01.775 { 00:27:01.775 "name": "nvme0n1", 00:27:01.775 "aliases": [ 00:27:01.775 "75109ddd-1206-448c-a702-79eb24e4f5dd" 00:27:01.775 ], 00:27:01.775 "product_name": "NVMe disk", 00:27:01.775 "block_size": 4096, 00:27:01.775 "num_blocks": 1310720, 00:27:01.775 "uuid": "75109ddd-1206-448c-a702-79eb24e4f5dd", 00:27:01.775 "assigned_rate_limits": { 00:27:01.775 "rw_ios_per_sec": 0, 00:27:01.775 "rw_mbytes_per_sec": 0, 00:27:01.775 "r_mbytes_per_sec": 0, 00:27:01.775 "w_mbytes_per_sec": 0 00:27:01.775 }, 00:27:01.775 "claimed": true, 00:27:01.775 "claim_type": "read_many_write_one", 00:27:01.775 "zoned": false, 00:27:01.775 "supported_io_types": { 00:27:01.775 "read": true, 00:27:01.775 "write": true, 00:27:01.775 "unmap": true, 00:27:01.775 "flush": true, 00:27:01.775 "reset": true, 00:27:01.775 "nvme_admin": true, 00:27:01.775 "nvme_io": true, 00:27:01.775 "nvme_io_md": false, 00:27:01.775 "write_zeroes": true, 00:27:01.775 "zcopy": false, 00:27:01.775 "get_zone_info": false, 00:27:01.775 "zone_management": false, 00:27:01.775 "zone_append": false, 00:27:01.775 "compare": true, 00:27:01.775 "compare_and_write": false, 00:27:01.775 "abort": true, 00:27:01.775 "seek_hole": false, 00:27:01.775 "seek_data": false, 00:27:01.775 "copy": true, 00:27:01.775 "nvme_iov_md": false 00:27:01.775 }, 00:27:01.775 "driver_specific": { 00:27:01.775 "nvme": [ 00:27:01.775 { 00:27:01.775 "pci_address": "0000:00:11.0", 00:27:01.775 "trid": { 00:27:01.775 "trtype": "PCIe", 00:27:01.775 "traddr": "0000:00:11.0" 00:27:01.775 }, 00:27:01.775 "ctrlr_data": { 00:27:01.775 "cntlid": 0, 00:27:01.775 "vendor_id": "0x1b36", 00:27:01.775 "model_number": "QEMU NVMe Ctrl", 00:27:01.775 "serial_number": "12341", 00:27:01.775 "firmware_revision": "8.0.0", 00:27:01.775 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:01.775 "oacs": { 00:27:01.775 "security": 0, 00:27:01.775 "format": 1, 00:27:01.775 "firmware": 0, 00:27:01.775 "ns_manage": 1 00:27:01.775 }, 00:27:01.775 "multi_ctrlr": false, 00:27:01.775 "ana_reporting": false 00:27:01.775 }, 00:27:01.775 "vs": { 00:27:01.775 "nvme_version": "1.4" 00:27:01.775 }, 00:27:01.775 "ns_data": { 00:27:01.775 "id": 1, 00:27:01.775 "can_share": false 00:27:01.775 } 00:27:01.775 } 00:27:01.775 ], 00:27:01.775 "mp_policy": "active_passive" 00:27:01.775 } 00:27:01.775 } 00:27:01.775 ]' 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:01.775 14:08:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:02.342 14:08:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=deecc4d3-243f-411f-857b-4ad8d16326d1 00:27:02.342 14:08:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:02.342 14:08:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u deecc4d3-243f-411f-857b-4ad8d16326d1 00:27:02.601 14:08:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:02.859 14:08:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=58c23042-148c-4509-8c7b-00059463fce8 00:27:02.859 14:08:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 58c23042-148c-4509-8c7b-00059463fce8 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:03.118 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:03.377 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:03.377 { 00:27:03.377 "name": "b3642a77-3e0a-458d-a241-a0d6a8ccbb4d", 00:27:03.377 "aliases": [ 00:27:03.377 "lvs/nvme0n1p0" 00:27:03.377 ], 00:27:03.377 "product_name": "Logical Volume", 00:27:03.377 "block_size": 4096, 00:27:03.377 "num_blocks": 26476544, 00:27:03.377 "uuid": "b3642a77-3e0a-458d-a241-a0d6a8ccbb4d", 00:27:03.377 "assigned_rate_limits": { 00:27:03.377 "rw_ios_per_sec": 0, 00:27:03.377 "rw_mbytes_per_sec": 0, 00:27:03.377 "r_mbytes_per_sec": 0, 00:27:03.377 "w_mbytes_per_sec": 0 00:27:03.377 }, 00:27:03.377 "claimed": false, 00:27:03.377 "zoned": false, 00:27:03.377 "supported_io_types": { 00:27:03.377 "read": true, 00:27:03.377 "write": true, 00:27:03.377 "unmap": true, 00:27:03.377 "flush": false, 00:27:03.377 "reset": true, 
00:27:03.377 "nvme_admin": false, 00:27:03.377 "nvme_io": false, 00:27:03.377 "nvme_io_md": false, 00:27:03.377 "write_zeroes": true, 00:27:03.377 "zcopy": false, 00:27:03.377 "get_zone_info": false, 00:27:03.377 "zone_management": false, 00:27:03.377 "zone_append": false, 00:27:03.377 "compare": false, 00:27:03.377 "compare_and_write": false, 00:27:03.377 "abort": false, 00:27:03.377 "seek_hole": true, 00:27:03.377 "seek_data": true, 00:27:03.377 "copy": false, 00:27:03.377 "nvme_iov_md": false 00:27:03.377 }, 00:27:03.377 "driver_specific": { 00:27:03.377 "lvol": { 00:27:03.377 "lvol_store_uuid": "58c23042-148c-4509-8c7b-00059463fce8", 00:27:03.377 "base_bdev": "nvme0n1", 00:27:03.377 "thin_provision": true, 00:27:03.377 "num_allocated_clusters": 0, 00:27:03.377 "snapshot": false, 00:27:03.377 "clone": false, 00:27:03.377 "esnap_clone": false 00:27:03.377 } 00:27:03.377 } 00:27:03.377 } 00:27:03.377 ]' 00:27:03.377 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:03.377 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:03.377 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:03.377 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:03.377 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:03.377 14:08:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:27:03.377 14:08:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:03.377 14:08:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:03.377 14:08:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:03.943 14:08:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:03.943 14:08:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:03.943 14:08:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:03.943 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:03.943 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:03.943 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:03.943 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:03.943 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:04.202 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:04.202 { 00:27:04.202 "name": "b3642a77-3e0a-458d-a241-a0d6a8ccbb4d", 00:27:04.202 "aliases": [ 00:27:04.202 "lvs/nvme0n1p0" 00:27:04.202 ], 00:27:04.202 "product_name": "Logical Volume", 00:27:04.202 "block_size": 4096, 00:27:04.202 "num_blocks": 26476544, 00:27:04.202 "uuid": "b3642a77-3e0a-458d-a241-a0d6a8ccbb4d", 00:27:04.202 "assigned_rate_limits": { 00:27:04.202 "rw_ios_per_sec": 0, 00:27:04.202 "rw_mbytes_per_sec": 0, 00:27:04.202 "r_mbytes_per_sec": 0, 00:27:04.202 "w_mbytes_per_sec": 0 00:27:04.202 }, 00:27:04.202 "claimed": false, 00:27:04.202 "zoned": false, 00:27:04.202 "supported_io_types": { 00:27:04.202 "read": true, 00:27:04.202 "write": true, 00:27:04.202 "unmap": 
true, 00:27:04.202 "flush": false, 00:27:04.202 "reset": true, 00:27:04.202 "nvme_admin": false, 00:27:04.202 "nvme_io": false, 00:27:04.202 "nvme_io_md": false, 00:27:04.202 "write_zeroes": true, 00:27:04.202 "zcopy": false, 00:27:04.202 "get_zone_info": false, 00:27:04.202 "zone_management": false, 00:27:04.202 "zone_append": false, 00:27:04.202 "compare": false, 00:27:04.202 "compare_and_write": false, 00:27:04.202 "abort": false, 00:27:04.202 "seek_hole": true, 00:27:04.202 "seek_data": true, 00:27:04.202 "copy": false, 00:27:04.202 "nvme_iov_md": false 00:27:04.202 }, 00:27:04.202 "driver_specific": { 00:27:04.202 "lvol": { 00:27:04.202 "lvol_store_uuid": "58c23042-148c-4509-8c7b-00059463fce8", 00:27:04.202 "base_bdev": "nvme0n1", 00:27:04.202 "thin_provision": true, 00:27:04.202 "num_allocated_clusters": 0, 00:27:04.202 "snapshot": false, 00:27:04.202 "clone": false, 00:27:04.202 "esnap_clone": false 00:27:04.202 } 00:27:04.202 } 00:27:04.202 } 00:27:04.202 ]' 00:27:04.202 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:04.202 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:04.202 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:04.202 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:04.202 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:04.202 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:27:04.202 14:08:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:04.202 14:08:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:04.461 14:08:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:04.461 14:08:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:04.461 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:04.461 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:04.461 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:04.461 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:04.461 14:08:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b3642a77-3e0a-458d-a241-a0d6a8ccbb4d 00:27:04.720 14:08:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:04.720 { 00:27:04.720 "name": "b3642a77-3e0a-458d-a241-a0d6a8ccbb4d", 00:27:04.720 "aliases": [ 00:27:04.720 "lvs/nvme0n1p0" 00:27:04.720 ], 00:27:04.720 "product_name": "Logical Volume", 00:27:04.720 "block_size": 4096, 00:27:04.720 "num_blocks": 26476544, 00:27:04.720 "uuid": "b3642a77-3e0a-458d-a241-a0d6a8ccbb4d", 00:27:04.720 "assigned_rate_limits": { 00:27:04.720 "rw_ios_per_sec": 0, 00:27:04.720 "rw_mbytes_per_sec": 0, 00:27:04.720 "r_mbytes_per_sec": 0, 00:27:04.720 "w_mbytes_per_sec": 0 00:27:04.720 }, 00:27:04.720 "claimed": false, 00:27:04.720 "zoned": false, 00:27:04.720 "supported_io_types": { 00:27:04.720 "read": true, 00:27:04.720 "write": true, 00:27:04.720 "unmap": true, 00:27:04.720 "flush": false, 00:27:04.720 "reset": true, 00:27:04.720 "nvme_admin": false, 00:27:04.720 
"nvme_io": false, 00:27:04.720 "nvme_io_md": false, 00:27:04.720 "write_zeroes": true, 00:27:04.720 "zcopy": false, 00:27:04.720 "get_zone_info": false, 00:27:04.720 "zone_management": false, 00:27:04.720 "zone_append": false, 00:27:04.720 "compare": false, 00:27:04.720 "compare_and_write": false, 00:27:04.720 "abort": false, 00:27:04.720 "seek_hole": true, 00:27:04.720 "seek_data": true, 00:27:04.720 "copy": false, 00:27:04.720 "nvme_iov_md": false 00:27:04.720 }, 00:27:04.720 "driver_specific": { 00:27:04.720 "lvol": { 00:27:04.720 "lvol_store_uuid": "58c23042-148c-4509-8c7b-00059463fce8", 00:27:04.720 "base_bdev": "nvme0n1", 00:27:04.720 "thin_provision": true, 00:27:04.720 "num_allocated_clusters": 0, 00:27:04.720 "snapshot": false, 00:27:04.720 "clone": false, 00:27:04.720 "esnap_clone": false 00:27:04.720 } 00:27:04.720 } 00:27:04.720 } 00:27:04.720 ]' 00:27:04.720 14:08:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:04.720 14:08:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:04.720 14:08:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:04.978 14:08:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:04.978 14:08:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:04.978 14:08:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:27:04.978 14:08:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:04.978 14:08:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b3642a77-3e0a-458d-a241-a0d6a8ccbb4d --l2p_dram_limit 10' 00:27:04.978 14:08:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:04.978 14:08:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:04.978 14:08:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:04.978 14:08:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b3642a77-3e0a-458d-a241-a0d6a8ccbb4d --l2p_dram_limit 10 -c nvc0n1p0 00:27:05.237 [2024-07-15 14:08:29.526851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.237 [2024-07-15 14:08:29.526927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:05.237 [2024-07-15 14:08:29.526951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:05.237 [2024-07-15 14:08:29.526966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.237 [2024-07-15 14:08:29.527078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.237 [2024-07-15 14:08:29.527114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:05.237 [2024-07-15 14:08:29.527140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:05.237 [2024-07-15 14:08:29.527160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.237 [2024-07-15 14:08:29.527196] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:05.237 [2024-07-15 14:08:29.528280] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:05.237 [2024-07-15 14:08:29.528345] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:27:05.237 [2024-07-15 14:08:29.528371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:05.237 [2024-07-15 14:08:29.528385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:27:05.237 [2024-07-15 14:08:29.528399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.237 [2024-07-15 14:08:29.528529] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f654a337-809f-45b2-9ca4-a55998feb384 00:27:05.237 [2024-07-15 14:08:29.529691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.237 [2024-07-15 14:08:29.529735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:05.237 [2024-07-15 14:08:29.529756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:05.237 [2024-07-15 14:08:29.529769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.237 [2024-07-15 14:08:29.534842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.237 [2024-07-15 14:08:29.534920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:05.237 [2024-07-15 14:08:29.534962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.991 ms 00:27:05.237 [2024-07-15 14:08:29.534976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.237 [2024-07-15 14:08:29.535118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.237 [2024-07-15 14:08:29.535140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:05.237 [2024-07-15 14:08:29.535156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:27:05.237 [2024-07-15 14:08:29.535168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.237 [2024-07-15 14:08:29.535273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.237 [2024-07-15 14:08:29.535338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:05.237 [2024-07-15 14:08:29.535370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:05.237 [2024-07-15 14:08:29.535396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.237 [2024-07-15 14:08:29.535452] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:05.237 [2024-07-15 14:08:29.540424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.237 [2024-07-15 14:08:29.540472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:05.237 [2024-07-15 14:08:29.540498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.989 ms 00:27:05.237 [2024-07-15 14:08:29.540528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.237 [2024-07-15 14:08:29.540600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.237 [2024-07-15 14:08:29.540623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:05.237 [2024-07-15 14:08:29.540637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:05.237 [2024-07-15 14:08:29.540650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.237 [2024-07-15 14:08:29.540718] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:05.237 [2024-07-15 
14:08:29.540897] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:05.237 [2024-07-15 14:08:29.540923] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:05.237 [2024-07-15 14:08:29.540957] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:05.237 [2024-07-15 14:08:29.540979] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:05.237 [2024-07-15 14:08:29.540996] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:05.237 [2024-07-15 14:08:29.541011] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:05.237 [2024-07-15 14:08:29.541034] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:05.237 [2024-07-15 14:08:29.541059] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:05.237 [2024-07-15 14:08:29.541085] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:05.237 [2024-07-15 14:08:29.541103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.237 [2024-07-15 14:08:29.541118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:05.237 [2024-07-15 14:08:29.541131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:27:05.237 [2024-07-15 14:08:29.541145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.237 [2024-07-15 14:08:29.541240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.237 [2024-07-15 14:08:29.541257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:05.237 [2024-07-15 14:08:29.541270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:05.237 [2024-07-15 14:08:29.541288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.237 [2024-07-15 14:08:29.541432] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:05.237 [2024-07-15 14:08:29.541472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:05.237 [2024-07-15 14:08:29.541509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:05.238 [2024-07-15 14:08:29.541526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.238 [2024-07-15 14:08:29.541539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:05.238 [2024-07-15 14:08:29.541552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:05.238 [2024-07-15 14:08:29.541564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:05.238 [2024-07-15 14:08:29.541577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:05.238 [2024-07-15 14:08:29.541588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:05.238 [2024-07-15 14:08:29.541600] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:05.238 [2024-07-15 14:08:29.541611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:05.238 [2024-07-15 14:08:29.541625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:05.238 [2024-07-15 14:08:29.541635] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:27:05.238 [2024-07-15 14:08:29.541650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:05.238 [2024-07-15 14:08:29.541662] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:05.238 [2024-07-15 14:08:29.541674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.238 [2024-07-15 14:08:29.541685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:05.238 [2024-07-15 14:08:29.541703] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:05.238 [2024-07-15 14:08:29.541714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.238 [2024-07-15 14:08:29.541728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:05.238 [2024-07-15 14:08:29.541739] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:05.238 [2024-07-15 14:08:29.541751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.238 [2024-07-15 14:08:29.541762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:05.238 [2024-07-15 14:08:29.541775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:05.238 [2024-07-15 14:08:29.541785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.238 [2024-07-15 14:08:29.541808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:05.238 [2024-07-15 14:08:29.541826] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:05.238 [2024-07-15 14:08:29.541850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.238 [2024-07-15 14:08:29.541866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:05.238 [2024-07-15 14:08:29.541880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:05.238 [2024-07-15 14:08:29.541896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.238 [2024-07-15 14:08:29.541920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:05.238 [2024-07-15 14:08:29.541942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:05.238 [2024-07-15 14:08:29.541969] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:05.238 [2024-07-15 14:08:29.541983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:05.238 [2024-07-15 14:08:29.541996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:05.238 [2024-07-15 14:08:29.542007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:05.238 [2024-07-15 14:08:29.542019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:05.238 [2024-07-15 14:08:29.542030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:05.238 [2024-07-15 14:08:29.542045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.238 [2024-07-15 14:08:29.542056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:05.238 [2024-07-15 14:08:29.542071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:05.238 [2024-07-15 14:08:29.542081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.238 [2024-07-15 14:08:29.542095] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:05.238 [2024-07-15 14:08:29.542107] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:05.238 [2024-07-15 14:08:29.542122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:05.238 [2024-07-15 14:08:29.542133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.238 [2024-07-15 14:08:29.542147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:05.238 [2024-07-15 14:08:29.542159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:05.238 [2024-07-15 14:08:29.542174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:05.238 [2024-07-15 14:08:29.542186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:05.238 [2024-07-15 14:08:29.542199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:05.238 [2024-07-15 14:08:29.542209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:05.238 [2024-07-15 14:08:29.542227] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:05.238 [2024-07-15 14:08:29.542241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.238 [2024-07-15 14:08:29.542260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:05.238 [2024-07-15 14:08:29.542277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:05.238 [2024-07-15 14:08:29.542322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:05.238 [2024-07-15 14:08:29.542343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:05.238 [2024-07-15 14:08:29.542359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:05.238 [2024-07-15 14:08:29.542375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:05.238 [2024-07-15 14:08:29.542400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:05.238 [2024-07-15 14:08:29.542417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:05.238 [2024-07-15 14:08:29.542433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:05.238 [2024-07-15 14:08:29.542446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:05.238 [2024-07-15 14:08:29.542461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:05.238 [2024-07-15 14:08:29.542473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:05.238 [2024-07-15 14:08:29.542487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:05.238 [2024-07-15 
14:08:29.542513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:05.238 [2024-07-15 14:08:29.542530] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:05.238 [2024-07-15 14:08:29.542543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.238 [2024-07-15 14:08:29.542559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:05.238 [2024-07-15 14:08:29.542571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:05.238 [2024-07-15 14:08:29.542585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:05.238 [2024-07-15 14:08:29.542597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:05.238 [2024-07-15 14:08:29.542613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.238 [2024-07-15 14:08:29.542635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:05.238 [2024-07-15 14:08:29.542651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:27:05.238 [2024-07-15 14:08:29.542663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.238 [2024-07-15 14:08:29.542722] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
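
The layout dump above is where the --l2p_dram_limit 10 argument passed to bdev_ftl_create becomes visible: 20971520 L2P entries of 4 bytes each make an 80 MiB logical-to-physical table, matching the "Region l2p ... blocks: 80.00 MiB" line, while the DRAM limit caps how much of that table may stay resident (the startup trace below reports "l2p maximum resident size is: 9 (of 10) MiB"). The arithmetic, as a sketch:

# L2P sizing from the dump above (one entry per 4 KiB logical block):
entries=20971520                                   # "L2P entries: 20971520"
entry_size=4                                       # "L2P address size: 4"
echo $(( entries * entry_size / 1024 / 1024 ))     # 80 -> the 80.00 MiB l2p region
echo $(( entries * 4096 / 1024**3 ))               # 80 -> 80 GiB of mappable user data
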
00:27:05.238 [2024-07-15 14:08:29.542740] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:07.139 [2024-07-15 14:08:31.549695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.139 [2024-07-15 14:08:31.549781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:07.139 [2024-07-15 14:08:31.549807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2006.973 ms 00:27:07.139 [2024-07-15 14:08:31.549821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.139 [2024-07-15 14:08:31.582449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.139 [2024-07-15 14:08:31.582526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:07.139 [2024-07-15 14:08:31.582552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.352 ms 00:27:07.139 [2024-07-15 14:08:31.582566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.139 [2024-07-15 14:08:31.582782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.139 [2024-07-15 14:08:31.582809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:07.139 [2024-07-15 14:08:31.582826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:27:07.139 [2024-07-15 14:08:31.582842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.139 [2024-07-15 14:08:31.621666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.139 [2024-07-15 14:08:31.621731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:07.139 [2024-07-15 14:08:31.621755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.755 ms 00:27:07.139 [2024-07-15 14:08:31.621768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.139 [2024-07-15 14:08:31.621835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.139 [2024-07-15 14:08:31.621859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:07.139 [2024-07-15 14:08:31.621875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:07.139 [2024-07-15 14:08:31.621887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.139 [2024-07-15 14:08:31.622257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.139 [2024-07-15 14:08:31.622276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:07.139 [2024-07-15 14:08:31.622292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:27:07.139 [2024-07-15 14:08:31.622327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.139 [2024-07-15 14:08:31.622482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.139 [2024-07-15 14:08:31.622512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:07.139 [2024-07-15 14:08:31.622533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:27:07.139 [2024-07-15 14:08:31.622545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.139 [2024-07-15 14:08:31.639750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.139 [2024-07-15 14:08:31.639816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:07.139 [2024-07-15 
14:08:31.639840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.172 ms 00:27:07.139 [2024-07-15 14:08:31.639853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.139 [2024-07-15 14:08:31.653703] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:07.139 [2024-07-15 14:08:31.656398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.139 [2024-07-15 14:08:31.656441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:07.139 [2024-07-15 14:08:31.656462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.411 ms 00:27:07.139 [2024-07-15 14:08:31.656477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.398 [2024-07-15 14:08:31.733003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.398 [2024-07-15 14:08:31.733090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:07.398 [2024-07-15 14:08:31.733114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.468 ms 00:27:07.398 [2024-07-15 14:08:31.733128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.398 [2024-07-15 14:08:31.733420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.398 [2024-07-15 14:08:31.733460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:07.398 [2024-07-15 14:08:31.733476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:27:07.398 [2024-07-15 14:08:31.733495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.398 [2024-07-15 14:08:31.766031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.398 [2024-07-15 14:08:31.766111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:07.398 [2024-07-15 14:08:31.766134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.440 ms 00:27:07.398 [2024-07-15 14:08:31.766149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.398 [2024-07-15 14:08:31.797537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.398 [2024-07-15 14:08:31.797641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:07.398 [2024-07-15 14:08:31.797665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.310 ms 00:27:07.398 [2024-07-15 14:08:31.797680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.398 [2024-07-15 14:08:31.798461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.398 [2024-07-15 14:08:31.798514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:07.398 [2024-07-15 14:08:31.798533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:27:07.398 [2024-07-15 14:08:31.798552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.398 [2024-07-15 14:08:31.892792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.398 [2024-07-15 14:08:31.892902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:07.398 [2024-07-15 14:08:31.892925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.140 ms 00:27:07.398 [2024-07-15 14:08:31.892944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.398 [2024-07-15 
14:08:31.926994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.398 [2024-07-15 14:08:31.927076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:07.398 [2024-07-15 14:08:31.927099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.955 ms 00:27:07.398 [2024-07-15 14:08:31.927114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.666 [2024-07-15 14:08:31.959185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.666 [2024-07-15 14:08:31.959267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:07.666 [2024-07-15 14:08:31.959289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.997 ms 00:27:07.666 [2024-07-15 14:08:31.959320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.666 [2024-07-15 14:08:31.991340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.666 [2024-07-15 14:08:31.991438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:07.666 [2024-07-15 14:08:31.991460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.937 ms 00:27:07.666 [2024-07-15 14:08:31.991476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.666 [2024-07-15 14:08:31.991580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.666 [2024-07-15 14:08:31.991605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:07.666 [2024-07-15 14:08:31.991620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:07.666 [2024-07-15 14:08:31.991638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.666 [2024-07-15 14:08:31.991779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.666 [2024-07-15 14:08:31.991804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:07.666 [2024-07-15 14:08:31.991822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:07.666 [2024-07-15 14:08:31.991835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.666 [2024-07-15 14:08:31.992956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2465.598 ms, result 0 00:27:07.666 { 00:27:07.666 "name": "ftl0", 00:27:07.666 "uuid": "f654a337-809f-45b2-9ca4-a55998feb384" 00:27:07.666 } 00:27:07.666 14:08:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:07.666 14:08:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:07.924 14:08:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:07.924 14:08:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:07.924 14:08:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:08.183 /dev/nbd0 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown 
-- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:08.183 1+0 records in 00:27:08.183 1+0 records out 00:27:08.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720013 s, 5.7 MB/s 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:27:08.183 14:08:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:08.183 [2024-07-15 14:08:32.693462] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:27:08.183 [2024-07-15 14:08:32.694368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83784 ] 00:27:08.441 [2024-07-15 14:08:32.869273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.699 [2024-07-15 14:08:33.085239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.604  Copying: 167/1024 [MB] (167 MBps) Copying: 337/1024 [MB] (169 MBps) Copying: 506/1024 [MB] (169 MBps) Copying: 675/1024 [MB] (168 MBps) Copying: 835/1024 [MB] (159 MBps) Copying: 986/1024 [MB] (151 MBps) Copying: 1024/1024 [MB] (average 163 MBps) 00:27:16.604 00:27:16.604 14:08:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:19.131 14:08:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:19.131 [2024-07-15 14:08:43.351319] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
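[Editor's note] The xtrace above comes from the waitfornbd helper in autotest_common.sh: a bounded loop polls /proc/partitions until the kernel has registered the NBD device, then a single 4 KiB direct read through a scratch file proves the device actually services I/O before the test pushes data at it. A minimal bash sketch of the same pattern — the function name, scratch path, and sleep interval here are illustrative, not the exact helper:

# Poll for an NBD device, then verify it with one direct-I/O read.
wait_for_nbd() {
    local nbd_name=$1 scratch=/tmp/nbdtest i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                     # assumed pacing between polls
    done
    dd "if=/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$scratch")     # confirm the read produced data
    rm -f "$scratch"
    [ "$size" != 0 ]
}
wait_for_nbd nbd0

Only once this gate passes does dirty_shutdown.sh start the spdk_dd fill through /dev/nbd0 whose startup banner appears above.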
00:27:19.131 [2024-07-15 14:08:43.351470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83894 ] 00:27:19.131 [2024-07-15 14:08:43.513000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.389 [2024-07-15 14:08:43.742929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.320  Copying: 17/1024 [MB] (17 MBps) Copying: 34/1024 [MB] (16 MBps) Copying: 51/1024 [MB] (17 MBps) Copying: 68/1024 [MB] (17 MBps) Copying: 83/1024 [MB] (14 MBps) Copying: 102/1024 [MB] (18 MBps) Copying: 116/1024 [MB] (14 MBps) Copying: 132/1024 [MB] (16 MBps) Copying: 148/1024 [MB] (15 MBps) Copying: 164/1024 [MB] (16 MBps) Copying: 181/1024 [MB] (16 MBps) Copying: 196/1024 [MB] (15 MBps) Copying: 211/1024 [MB] (15 MBps) Copying: 228/1024 [MB] (17 MBps) Copying: 245/1024 [MB] (16 MBps) Copying: 263/1024 [MB] (17 MBps) Copying: 278/1024 [MB] (15 MBps) Copying: 295/1024 [MB] (17 MBps) Copying: 309/1024 [MB] (14 MBps) Copying: 326/1024 [MB] (16 MBps) Copying: 341/1024 [MB] (14 MBps) Copying: 357/1024 [MB] (16 MBps) Copying: 373/1024 [MB] (15 MBps) Copying: 391/1024 [MB] (18 MBps) Copying: 407/1024 [MB] (16 MBps) Copying: 425/1024 [MB] (17 MBps) Copying: 442/1024 [MB] (16 MBps) Copying: 459/1024 [MB] (16 MBps) Copying: 476/1024 [MB] (16 MBps) Copying: 493/1024 [MB] (17 MBps) Copying: 510/1024 [MB] (17 MBps) Copying: 526/1024 [MB] (15 MBps) Copying: 543/1024 [MB] (16 MBps) Copying: 562/1024 [MB] (19 MBps) Copying: 580/1024 [MB] (17 MBps) Copying: 597/1024 [MB] (17 MBps) Copying: 615/1024 [MB] (17 MBps) Copying: 633/1024 [MB] (18 MBps) Copying: 650/1024 [MB] (17 MBps) Copying: 667/1024 [MB] (16 MBps) Copying: 683/1024 [MB] (15 MBps) Copying: 701/1024 [MB] (18 MBps) Copying: 719/1024 [MB] (18 MBps) Copying: 737/1024 [MB] (18 MBps) Copying: 753/1024 [MB] (16 MBps) Copying: 769/1024 [MB] (15 MBps) Copying: 786/1024 [MB] (16 MBps) Copying: 802/1024 [MB] (15 MBps) Copying: 817/1024 [MB] (15 MBps) Copying: 833/1024 [MB] (15 MBps) Copying: 848/1024 [MB] (15 MBps) Copying: 864/1024 [MB] (16 MBps) Copying: 880/1024 [MB] (16 MBps) Copying: 896/1024 [MB] (15 MBps) Copying: 913/1024 [MB] (16 MBps) Copying: 930/1024 [MB] (17 MBps) Copying: 946/1024 [MB] (16 MBps) Copying: 962/1024 [MB] (15 MBps) Copying: 979/1024 [MB] (16 MBps) Copying: 996/1024 [MB] (17 MBps) Copying: 1013/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 16 MBps) 00:28:22.320 00:28:22.320 14:09:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:22.320 14:09:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:22.884 14:09:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:22.884 [2024-07-15 14:09:47.420038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.884 [2024-07-15 14:09:47.420103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:22.884 [2024-07-15 14:09:47.420141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:22.885 [2024-07-15 14:09:47.420155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.885 [2024-07-15 14:09:47.420202] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL 
IO channel destroy on app_thread 00:28:22.885 [2024-07-15 14:09:47.423601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.885 [2024-07-15 14:09:47.423648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:22.885 [2024-07-15 14:09:47.423673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.373 ms 00:28:22.885 [2024-07-15 14:09:47.423695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.885 [2024-07-15 14:09:47.425489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.885 [2024-07-15 14:09:47.425546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:22.885 [2024-07-15 14:09:47.425566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:28:22.885 [2024-07-15 14:09:47.425581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.143 [2024-07-15 14:09:47.440805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.143 [2024-07-15 14:09:47.440862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:23.143 [2024-07-15 14:09:47.440883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.193 ms 00:28:23.143 [2024-07-15 14:09:47.440899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.143 [2024-07-15 14:09:47.447738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.143 [2024-07-15 14:09:47.447787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:23.143 [2024-07-15 14:09:47.447805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.790 ms 00:28:23.143 [2024-07-15 14:09:47.447820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.143 [2024-07-15 14:09:47.479243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.143 [2024-07-15 14:09:47.479299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:23.143 [2024-07-15 14:09:47.479341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.313 ms 00:28:23.143 [2024-07-15 14:09:47.479357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.143 [2024-07-15 14:09:47.498931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.143 [2024-07-15 14:09:47.498994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:23.143 [2024-07-15 14:09:47.499018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.519 ms 00:28:23.143 [2024-07-15 14:09:47.499033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.143 [2024-07-15 14:09:47.499247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.143 [2024-07-15 14:09:47.499288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:23.143 [2024-07-15 14:09:47.499332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:28:23.143 [2024-07-15 14:09:47.499353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.143 [2024-07-15 14:09:47.531949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.143 [2024-07-15 14:09:47.532003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:23.143 [2024-07-15 14:09:47.532022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.555 ms 00:28:23.143 
[2024-07-15 14:09:47.532037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.143 [2024-07-15 14:09:47.563854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.143 [2024-07-15 14:09:47.563908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:23.143 [2024-07-15 14:09:47.563927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.766 ms 00:28:23.143 [2024-07-15 14:09:47.563942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.143 [2024-07-15 14:09:47.595383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.143 [2024-07-15 14:09:47.595440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:23.143 [2024-07-15 14:09:47.595467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.389 ms 00:28:23.143 [2024-07-15 14:09:47.595493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.143 [2024-07-15 14:09:47.626681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.143 [2024-07-15 14:09:47.626733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:23.143 [2024-07-15 14:09:47.626753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.063 ms 00:28:23.143 [2024-07-15 14:09:47.626767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.143 [2024-07-15 14:09:47.626818] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:23.143 [2024-07-15 14:09:47.626856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.626872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.626887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.626900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.626914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.626927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.626946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.626958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.626976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.626989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:28:23.144 [2024-07-15 14:09:47.627058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:23.144 [2024-07-15 14:09:47.627824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.627988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628123] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:23.145 [2024-07-15 14:09:47.628296] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:23.145 [2024-07-15 14:09:47.628321] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f654a337-809f-45b2-9ca4-a55998feb384 00:28:23.145 [2024-07-15 14:09:47.628338] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:23.145 [2024-07-15 14:09:47.628349] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:23.145 [2024-07-15 14:09:47.628370] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:23.145 [2024-07-15 14:09:47.628382] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:23.145 [2024-07-15 14:09:47.628395] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:23.145 [2024-07-15 14:09:47.628407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:23.145 [2024-07-15 14:09:47.628420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:23.145 [2024-07-15 14:09:47.628431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:23.145 [2024-07-15 14:09:47.628443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:23.145 [2024-07-15 14:09:47.628455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.145 [2024-07-15 14:09:47.628469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:23.145 [2024-07-15 14:09:47.628482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:28:23.145 [2024-07-15 14:09:47.628495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.145 [2024-07-15 14:09:47.645502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.145 [2024-07-15 14:09:47.645552] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:23.145 [2024-07-15 14:09:47.645571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.940 ms 00:28:23.145 [2024-07-15 14:09:47.645587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.145 [2024-07-15 14:09:47.646029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.145 [2024-07-15 14:09:47.646059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:23.145 [2024-07-15 14:09:47.646075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:28:23.145 [2024-07-15 14:09:47.646090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.403 [2024-07-15 14:09:47.700674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.403 [2024-07-15 14:09:47.700749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:23.403 [2024-07-15 14:09:47.700770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.403 [2024-07-15 14:09:47.700786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.403 [2024-07-15 14:09:47.700876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.403 [2024-07-15 14:09:47.700897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:23.403 [2024-07-15 14:09:47.700910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.403 [2024-07-15 14:09:47.700925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.403 [2024-07-15 14:09:47.701070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.403 [2024-07-15 14:09:47.701100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:23.403 [2024-07-15 14:09:47.701129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.403 [2024-07-15 14:09:47.701144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.403 [2024-07-15 14:09:47.701172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.403 [2024-07-15 14:09:47.701192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:23.403 [2024-07-15 14:09:47.701205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.403 [2024-07-15 14:09:47.701227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.403 [2024-07-15 14:09:47.804560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.403 [2024-07-15 14:09:47.804650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:23.403 [2024-07-15 14:09:47.804685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.403 [2024-07-15 14:09:47.804699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.403 [2024-07-15 14:09:47.891337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.403 [2024-07-15 14:09:47.891429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:23.403 [2024-07-15 14:09:47.891450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.403 [2024-07-15 14:09:47.891466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.403 [2024-07-15 14:09:47.891582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:28:23.403 [2024-07-15 14:09:47.891607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:23.403 [2024-07-15 14:09:47.891625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.403 [2024-07-15 14:09:47.891650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.403 [2024-07-15 14:09:47.891717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.403 [2024-07-15 14:09:47.891742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:23.403 [2024-07-15 14:09:47.891757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.403 [2024-07-15 14:09:47.891771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.403 [2024-07-15 14:09:47.891894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.403 [2024-07-15 14:09:47.891936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:23.403 [2024-07-15 14:09:47.891950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.404 [2024-07-15 14:09:47.891968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.404 [2024-07-15 14:09:47.892030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.404 [2024-07-15 14:09:47.892053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:23.404 [2024-07-15 14:09:47.892067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.404 [2024-07-15 14:09:47.892081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.404 [2024-07-15 14:09:47.892128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.404 [2024-07-15 14:09:47.892147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:23.404 [2024-07-15 14:09:47.892160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.404 [2024-07-15 14:09:47.892177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.404 [2024-07-15 14:09:47.892234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.404 [2024-07-15 14:09:47.892258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:23.404 [2024-07-15 14:09:47.892272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.404 [2024-07-15 14:09:47.892286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.404 [2024-07-15 14:09:47.892483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 472.408 ms, result 0 00:28:23.404 true 00:28:23.404 14:09:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 83640 00:28:23.404 14:09:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid83640 00:28:23.404 14:09:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:23.661 [2024-07-15 14:09:48.027705] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
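[Editor's note] This is the pivot of the dirty-shutdown scenario: the spdk_tgt process that owned ftl0 is killed with SIGKILL rather than shut down, and the next process to open ftl0 has to bring it up through the restore path that follows below. Condensed to its shell steps, using only the commands visible in the trace above and on the next lines ($FTL_DIR is shorthand for the test directory, not a variable the script defines, and the spdk_dd binary path is abbreviated):

FTL_DIR=/home/vagrant/spdk_repo/spdk/test/ftl

kill -9 83640                              # dirty_shutdown.sh@83: SIGKILL the target, no clean teardown
rm -f /dev/shm/spdk_tgt_trace.pid83640     # @84: drop the dead target's trace file
# @87: stage 1 GiB (262144 x 4 KiB) of fresh random data ...
spdk_dd --if=/dev/urandom --of=$FTL_DIR/testfile2 --bs=4096 --count=262144
# @88: ... then write it into ftl0 through the saved bdev config;
# opening ftl0 here is what triggers the FTL recovery traced below.
spdk_dd --if=$FTL_DIR/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
        --json=$FTL_DIR/config/ftl.json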
00:28:23.661 [2024-07-15 14:09:48.027871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84527 ] 00:28:23.661 [2024-07-15 14:09:48.198212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.965 [2024-07-15 14:09:48.424060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.852  Copying: 158/1024 [MB] (158 MBps) Copying: 321/1024 [MB] (163 MBps) Copying: 486/1024 [MB] (165 MBps) Copying: 655/1024 [MB] (168 MBps) Copying: 826/1024 [MB] (171 MBps) Copying: 995/1024 [MB] (168 MBps) Copying: 1024/1024 [MB] (average 166 MBps) 00:28:31.852 00:28:31.852 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 83640 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:31.852 14:09:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:31.852 [2024-07-15 14:09:56.108825] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:28:31.852 [2024-07-15 14:09:56.108997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84611 ] 00:28:31.852 [2024-07-15 14:09:56.279672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.109 [2024-07-15 14:09:56.486908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.366 [2024-07-15 14:09:56.797417] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:32.366 [2024-07-15 14:09:56.797495] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:32.366 [2024-07-15 14:09:56.864057] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:32.366 [2024-07-15 14:09:56.864363] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:32.366 [2024-07-15 14:09:56.864562] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:32.624 [2024-07-15 14:09:57.107902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.624 [2024-07-15 14:09:57.107968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:32.624 [2024-07-15 14:09:57.107991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:32.624 [2024-07-15 14:09:57.108003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.624 [2024-07-15 14:09:57.108081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.624 [2024-07-15 14:09:57.108104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:32.624 [2024-07-15 14:09:57.108117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:32.624 [2024-07-15 14:09:57.108134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.624 [2024-07-15 14:09:57.108166] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:32.624 [2024-07-15 14:09:57.109099] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:32.624 [2024-07-15 14:09:57.109144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.624 [2024-07-15 14:09:57.109159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:32.624 [2024-07-15 14:09:57.109172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:28:32.624 [2024-07-15 14:09:57.109184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.624 [2024-07-15 14:09:57.110378] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:32.625 [2024-07-15 14:09:57.127437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.625 [2024-07-15 14:09:57.127496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:32.625 [2024-07-15 14:09:57.127544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.060 ms 00:28:32.625 [2024-07-15 14:09:57.127563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.625 [2024-07-15 14:09:57.127665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.625 [2024-07-15 14:09:57.127685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:32.625 [2024-07-15 14:09:57.127698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:32.625 [2024-07-15 14:09:57.127708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.625 [2024-07-15 14:09:57.132759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.625 [2024-07-15 14:09:57.132952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.625 [2024-07-15 14:09:57.133093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.960 ms 00:28:32.625 [2024-07-15 14:09:57.133146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.625 [2024-07-15 14:09:57.133405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.625 [2024-07-15 14:09:57.133466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.625 [2024-07-15 14:09:57.133510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:32.625 [2024-07-15 14:09:57.133604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.625 [2024-07-15 14:09:57.133723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.625 [2024-07-15 14:09:57.133778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:32.625 [2024-07-15 14:09:57.133951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:32.625 [2024-07-15 14:09:57.134015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.625 [2024-07-15 14:09:57.134200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:32.625 [2024-07-15 14:09:57.138685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.625 [2024-07-15 14:09:57.138844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.625 [2024-07-15 14:09:57.138969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.497 ms 00:28:32.625 [2024-07-15 14:09:57.139094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.625 [2024-07-15 
14:09:57.139192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.625 [2024-07-15 14:09:57.139354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:32.625 [2024-07-15 14:09:57.139470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:32.625 [2024-07-15 14:09:57.139522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.625 [2024-07-15 14:09:57.139674] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:32.625 [2024-07-15 14:09:57.139753] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:32.625 [2024-07-15 14:09:57.139864] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:32.625 [2024-07-15 14:09:57.139982] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:32.625 [2024-07-15 14:09:57.140142] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:32.625 [2024-07-15 14:09:57.140325] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:32.625 [2024-07-15 14:09:57.140397] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:32.625 [2024-07-15 14:09:57.140460] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:32.625 [2024-07-15 14:09:57.140613] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:32.625 [2024-07-15 14:09:57.140684] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:32.625 [2024-07-15 14:09:57.140781] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:32.625 [2024-07-15 14:09:57.140881] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:32.625 [2024-07-15 14:09:57.140930] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:32.625 [2024-07-15 14:09:57.141018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.625 [2024-07-15 14:09:57.141067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:32.625 [2024-07-15 14:09:57.141107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:28:32.625 [2024-07-15 14:09:57.141145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.625 [2024-07-15 14:09:57.141274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.625 [2024-07-15 14:09:57.141345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:32.625 [2024-07-15 14:09:57.141390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:32.625 [2024-07-15 14:09:57.141429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.625 [2024-07-15 14:09:57.141611] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:32.625 [2024-07-15 14:09:57.141794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:32.625 [2024-07-15 14:09:57.141852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:32.625 [2024-07-15 14:09:57.142035] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:28:32.625 [2024-07-15 14:09:57.142092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:32.625 [2024-07-15 14:09:57.142157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:32.625 [2024-07-15 14:09:57.142202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:32.625 [2024-07-15 14:09:57.142239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:32.625 [2024-07-15 14:09:57.142276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:32.625 [2024-07-15 14:09:57.142333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:32.625 [2024-07-15 14:09:57.142389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:32.625 [2024-07-15 14:09:57.142437] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:32.625 [2024-07-15 14:09:57.142475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:32.625 [2024-07-15 14:09:57.142512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:32.625 [2024-07-15 14:09:57.142553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:32.625 [2024-07-15 14:09:57.142653] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.625 [2024-07-15 14:09:57.142714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:32.625 [2024-07-15 14:09:57.142753] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:32.625 [2024-07-15 14:09:57.142790] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.625 [2024-07-15 14:09:57.142829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:32.625 [2024-07-15 14:09:57.142907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:32.625 [2024-07-15 14:09:57.142944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.625 [2024-07-15 14:09:57.142981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:32.625 [2024-07-15 14:09:57.143018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:32.625 [2024-07-15 14:09:57.143055] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.625 [2024-07-15 14:09:57.143133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:32.625 [2024-07-15 14:09:57.143185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:32.625 [2024-07-15 14:09:57.143222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.625 [2024-07-15 14:09:57.143259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:32.625 [2024-07-15 14:09:57.143295] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:32.625 [2024-07-15 14:09:57.143380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.625 [2024-07-15 14:09:57.143515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:32.625 [2024-07-15 14:09:57.143552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:32.625 [2024-07-15 14:09:57.143590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:32.625 [2024-07-15 14:09:57.143628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:32.625 [2024-07-15 14:09:57.143767] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:32.625 [2024-07-15 14:09:57.143789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:32.625 [2024-07-15 14:09:57.143802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:32.625 [2024-07-15 14:09:57.143813] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:32.625 [2024-07-15 14:09:57.143823] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.625 [2024-07-15 14:09:57.143833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:32.625 [2024-07-15 14:09:57.143843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:32.625 [2024-07-15 14:09:57.143853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.625 [2024-07-15 14:09:57.143863] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:32.625 [2024-07-15 14:09:57.143875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:32.625 [2024-07-15 14:09:57.143886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:32.625 [2024-07-15 14:09:57.143897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.626 [2024-07-15 14:09:57.143908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:32.626 [2024-07-15 14:09:57.143919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:32.626 [2024-07-15 14:09:57.143929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:32.626 [2024-07-15 14:09:57.143940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:32.626 [2024-07-15 14:09:57.143950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:32.626 [2024-07-15 14:09:57.143960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:32.626 [2024-07-15 14:09:57.143972] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:32.626 [2024-07-15 14:09:57.143994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.626 [2024-07-15 14:09:57.144008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:32.626 [2024-07-15 14:09:57.144019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:32.626 [2024-07-15 14:09:57.144030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:32.626 [2024-07-15 14:09:57.144041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:32.626 [2024-07-15 14:09:57.144053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:32.626 [2024-07-15 14:09:57.144064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:32.626 [2024-07-15 14:09:57.144075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:32.626 [2024-07-15 
14:09:57.144086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:32.626 [2024-07-15 14:09:57.144097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:32.626 [2024-07-15 14:09:57.144109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:32.626 [2024-07-15 14:09:57.144120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:32.626 [2024-07-15 14:09:57.144133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:32.626 [2024-07-15 14:09:57.144144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:32.626 [2024-07-15 14:09:57.144156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:32.626 [2024-07-15 14:09:57.144167] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:32.626 [2024-07-15 14:09:57.144180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.626 [2024-07-15 14:09:57.144193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:32.626 [2024-07-15 14:09:57.144205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:32.626 [2024-07-15 14:09:57.144216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:32.626 [2024-07-15 14:09:57.144228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:32.626 [2024-07-15 14:09:57.144242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.626 [2024-07-15 14:09:57.144254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:32.626 [2024-07-15 14:09:57.144266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.697 ms 00:28:32.626 [2024-07-15 14:09:57.144277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.188606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.188671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.884 [2024-07-15 14:09:57.188693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.224 ms 00:28:32.884 [2024-07-15 14:09:57.188705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.188827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.188845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:32.884 [2024-07-15 14:09:57.188858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:32.884 [2024-07-15 14:09:57.188877] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.228752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.228813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:32.884 [2024-07-15 14:09:57.228833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.781 ms 00:28:32.884 [2024-07-15 14:09:57.228845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.228912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.228936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.884 [2024-07-15 14:09:57.228949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:32.884 [2024-07-15 14:09:57.228960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.229368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.229390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.884 [2024-07-15 14:09:57.229403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:28:32.884 [2024-07-15 14:09:57.229414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.229574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.229601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.884 [2024-07-15 14:09:57.229619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:28:32.884 [2024-07-15 14:09:57.229630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.246053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.246102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:32.884 [2024-07-15 14:09:57.246120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.396 ms 00:28:32.884 [2024-07-15 14:09:57.246131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.263211] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:32.884 [2024-07-15 14:09:57.263259] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:32.884 [2024-07-15 14:09:57.263278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.263315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:32.884 [2024-07-15 14:09:57.263332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.016 ms 00:28:32.884 [2024-07-15 14:09:57.263343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.293797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.293863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:32.884 [2024-07-15 14:09:57.293882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.405 ms 00:28:32.884 [2024-07-15 14:09:57.293894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 
14:09:57.309763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.309833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:32.884 [2024-07-15 14:09:57.309862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.801 ms 00:28:32.884 [2024-07-15 14:09:57.309883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.326578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.326628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:32.884 [2024-07-15 14:09:57.326646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.617 ms 00:28:32.884 [2024-07-15 14:09:57.326658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.327492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.327533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:32.884 [2024-07-15 14:09:57.327555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:28:32.884 [2024-07-15 14:09:57.327567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.402132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.402212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:32.884 [2024-07-15 14:09:57.402233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.537 ms 00:28:32.884 [2024-07-15 14:09:57.402245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.415316] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:32.884 [2024-07-15 14:09:57.417980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.418018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:32.884 [2024-07-15 14:09:57.418035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.652 ms 00:28:32.884 [2024-07-15 14:09:57.418047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.418154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.418175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:32.884 [2024-07-15 14:09:57.418192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:32.884 [2024-07-15 14:09:57.418204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.418297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.418332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:32.884 [2024-07-15 14:09:57.418345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:32.884 [2024-07-15 14:09:57.418356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.418391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.418406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:32.884 [2024-07-15 14:09:57.418418] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:32.884 [2024-07-15 14:09:57.418435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.884 [2024-07-15 14:09:57.418474] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:32.884 [2024-07-15 14:09:57.418492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.884 [2024-07-15 14:09:57.418504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:32.884 [2024-07-15 14:09:57.418515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:32.884 [2024-07-15 14:09:57.418526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.142 [2024-07-15 14:09:57.450731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.142 [2024-07-15 14:09:57.450901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:33.142 [2024-07-15 14:09:57.451032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.181 ms 00:28:33.142 [2024-07-15 14:09:57.451093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.142 [2024-07-15 14:09:57.451347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.142 [2024-07-15 14:09:57.451420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:33.142 [2024-07-15 14:09:57.451469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:33.142 [2024-07-15 14:09:57.451553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.142 [2024-07-15 14:09:57.452755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 344.328 ms, result 0 00:29:11.518  Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-15 14:10:36.018635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.518 [2024-07-15 14:10:36.018724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:11.518 [2024-07-15 14:10:36.018746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:11.518 [2024-07-15 14:10:36.018758]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.518 [2024-07-15 14:10:36.021015] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:11.518 [2024-07-15 14:10:36.028511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.518 [2024-07-15 14:10:36.028553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:11.518 [2024-07-15 14:10:36.028570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.446 ms 00:29:11.518 [2024-07-15 14:10:36.028582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.518 [2024-07-15 14:10:36.041166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.518 [2024-07-15 14:10:36.041216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:11.518 [2024-07-15 14:10:36.041244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.356 ms 00:29:11.518 [2024-07-15 14:10:36.041256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.518 [2024-07-15 14:10:36.063794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.518 [2024-07-15 14:10:36.063840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:11.518 [2024-07-15 14:10:36.063859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.515 ms 00:29:11.518 [2024-07-15 14:10:36.063872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.776 [2024-07-15 14:10:36.070714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.776 [2024-07-15 14:10:36.070748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:11.776 [2024-07-15 14:10:36.070763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.803 ms 00:29:11.776 [2024-07-15 14:10:36.070783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.776 [2024-07-15 14:10:36.102525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.776 [2024-07-15 14:10:36.102575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:11.776 [2024-07-15 14:10:36.102600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.672 ms 00:29:11.776 [2024-07-15 14:10:36.102613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.776 [2024-07-15 14:10:36.120658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.776 [2024-07-15 14:10:36.120708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:11.776 [2024-07-15 14:10:36.120727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.996 ms 00:29:11.776 [2024-07-15 14:10:36.120739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.776 [2024-07-15 14:10:36.213502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.776 [2024-07-15 14:10:36.213571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:11.776 [2024-07-15 14:10:36.213592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.707 ms 00:29:11.776 [2024-07-15 14:10:36.213604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.776 [2024-07-15 14:10:36.245414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.776 [2024-07-15 14:10:36.245468] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:11.776 [2024-07-15 14:10:36.245486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.786 ms 00:29:11.776 [2024-07-15 14:10:36.245498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.776 [2024-07-15 14:10:36.276687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.776 [2024-07-15 14:10:36.276733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:11.776 [2024-07-15 14:10:36.276751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.139 ms 00:29:11.776 [2024-07-15 14:10:36.276762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.776 [2024-07-15 14:10:36.307857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.776 [2024-07-15 14:10:36.307901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:11.776 [2024-07-15 14:10:36.307918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.049 ms 00:29:11.777 [2024-07-15 14:10:36.307929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.036 [2024-07-15 14:10:36.338474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.036 [2024-07-15 14:10:36.338526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:12.036 [2024-07-15 14:10:36.338544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.447 ms 00:29:12.036 [2024-07-15 14:10:36.338556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.036 [2024-07-15 14:10:36.338608] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:12.036 [2024-07-15 14:10:36.338633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130304 / 261120 wr_cnt: 1 state: open 00:29:12.036 [2024-07-15 14:10:36.338648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.338990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:12.036 [2024-07-15 14:10:36.339001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339071] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 
14:10:36.339391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:29:12.037 [2024-07-15 14:10:36.339681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:12.037 [2024-07-15 14:10:36.339854] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:12.037 [2024-07-15 14:10:36.339865] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f654a337-809f-45b2-9ca4-a55998feb384 00:29:12.037 [2024-07-15 14:10:36.339877] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130304 00:29:12.037 [2024-07-15 14:10:36.339888] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131264 00:29:12.037 [2024-07-15 14:10:36.339906] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130304 00:29:12.037 [2024-07-15 14:10:36.339922] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:29:12.037 [2024-07-15 14:10:36.339932] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:12.037 [2024-07-15 14:10:36.339944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:12.037 [2024-07-15 14:10:36.339954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:12.037 [2024-07-15 14:10:36.339964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:12.037 [2024-07-15 14:10:36.339974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:12.037 [2024-07-15 14:10:36.339985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.037 [2024-07-15 14:10:36.339997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:12.037 [2024-07-15 14:10:36.340021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.378 ms 00:29:12.037 [2024-07-15 14:10:36.340033] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:12.037 [2024-07-15 14:10:36.356441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.037 [2024-07-15 14:10:36.356484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:12.037 [2024-07-15 14:10:36.356509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.363 ms 00:29:12.037 [2024-07-15 14:10:36.356521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.037 [2024-07-15 14:10:36.356947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.037 [2024-07-15 14:10:36.356979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:12.037 [2024-07-15 14:10:36.356994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:29:12.037 [2024-07-15 14:10:36.357005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.037 [2024-07-15 14:10:36.393735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.037 [2024-07-15 14:10:36.393789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:12.037 [2024-07-15 14:10:36.393805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.038 [2024-07-15 14:10:36.393817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.038 [2024-07-15 14:10:36.393891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.038 [2024-07-15 14:10:36.393906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:12.038 [2024-07-15 14:10:36.393918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.038 [2024-07-15 14:10:36.393929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.038 [2024-07-15 14:10:36.394024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.038 [2024-07-15 14:10:36.394049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:12.038 [2024-07-15 14:10:36.394062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.038 [2024-07-15 14:10:36.394073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.038 [2024-07-15 14:10:36.394096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.038 [2024-07-15 14:10:36.394118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:12.038 [2024-07-15 14:10:36.394130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.038 [2024-07-15 14:10:36.394141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.038 [2024-07-15 14:10:36.492911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.038 [2024-07-15 14:10:36.492989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:12.038 [2024-07-15 14:10:36.493009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.038 [2024-07-15 14:10:36.493020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.316 [2024-07-15 14:10:36.583701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.316 [2024-07-15 14:10:36.583762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:12.316 [2024-07-15 14:10:36.583781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
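A quick cross-check of the "Dump statistics" block above: the logged WAF is just total device writes divided by user writes, and the verbatim figures (total writes: 131264, user writes: 130304) do reproduce the reported 1.0074. A minimal sketch in Python:

    # Recompute write amplification from the ftl_debug.c statistics dump above.
    total_writes = 131264   # "[FTL][ftl0] total writes"
    user_writes = 130304    # "[FTL][ftl0] user writes"
    print(f"WAF: {total_writes / user_writes:.4f}")   # prints "WAF: 1.0074"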
00:29:12.316 [2024-07-15 14:10:36.583793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.316 [2024-07-15 14:10:36.583872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.316 [2024-07-15 14:10:36.583889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:12.316 [2024-07-15 14:10:36.583909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.316 [2024-07-15 14:10:36.583920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.316 [2024-07-15 14:10:36.583963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.316 [2024-07-15 14:10:36.583978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:12.316 [2024-07-15 14:10:36.583990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.316 [2024-07-15 14:10:36.584000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.316 [2024-07-15 14:10:36.584116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.316 [2024-07-15 14:10:36.584135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:12.316 [2024-07-15 14:10:36.584147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.316 [2024-07-15 14:10:36.584164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.316 [2024-07-15 14:10:36.584216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.316 [2024-07-15 14:10:36.584234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:12.316 [2024-07-15 14:10:36.584246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.316 [2024-07-15 14:10:36.584256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.316 [2024-07-15 14:10:36.584325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.316 [2024-07-15 14:10:36.584345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:12.316 [2024-07-15 14:10:36.584358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.316 [2024-07-15 14:10:36.584375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.316 [2024-07-15 14:10:36.584428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.316 [2024-07-15 14:10:36.584444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:12.316 [2024-07-15 14:10:36.584457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.316 [2024-07-15 14:10:36.584468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.316 [2024-07-15 14:10:36.584617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 566.950 ms, result 0 00:29:14.219 00:29:14.219 00:29:14.219 14:10:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:16.121 14:10:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:16.378 [2024-07-15 14:10:40.753979] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:29:16.378 [2024-07-15 14:10:40.754113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85049 ] 00:29:16.378 [2024-07-15 14:10:40.923793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.946 [2024-07-15 14:10:41.190018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.217 [2024-07-15 14:10:41.518225] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:17.217 [2024-07-15 14:10:41.518344] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:17.217 [2024-07-15 14:10:41.682798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-07-15 14:10:41.682857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:17.217 [2024-07-15 14:10:41.682879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:17.217 [2024-07-15 14:10:41.682892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.682983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-07-15 14:10:41.683004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:17.217 [2024-07-15 14:10:41.683033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:17.217 [2024-07-15 14:10:41.683048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.683095] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:17.217 [2024-07-15 14:10:41.684081] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:17.217 [2024-07-15 14:10:41.684125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-07-15 14:10:41.684144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:17.217 [2024-07-15 14:10:41.684157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:29:17.217 [2024-07-15 14:10:41.684170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.685395] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:17.217 [2024-07-15 14:10:41.703791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-07-15 14:10:41.703833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:17.217 [2024-07-15 14:10:41.703851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.398 ms 00:29:17.217 [2024-07-15 14:10:41.703863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.703945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-07-15 14:10:41.703979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:17.217 [2024-07-15 14:10:41.703995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:17.217 [2024-07-15 14:10:41.704006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.708999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:17.217 [2024-07-15 14:10:41.709055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:17.217 [2024-07-15 14:10:41.709071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.878 ms 00:29:17.217 [2024-07-15 14:10:41.709083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.709204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-07-15 14:10:41.709222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:17.217 [2024-07-15 14:10:41.709249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:29:17.217 [2024-07-15 14:10:41.709260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.709350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-07-15 14:10:41.709367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:17.217 [2024-07-15 14:10:41.709423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:17.217 [2024-07-15 14:10:41.709436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.709472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:17.217 [2024-07-15 14:10:41.714117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-07-15 14:10:41.714154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:17.217 [2024-07-15 14:10:41.714170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.654 ms 00:29:17.217 [2024-07-15 14:10:41.714180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.714244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-07-15 14:10:41.714274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:17.217 [2024-07-15 14:10:41.714301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:17.217 [2024-07-15 14:10:41.714312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.714424] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:17.217 [2024-07-15 14:10:41.714529] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:17.217 [2024-07-15 14:10:41.714577] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:17.217 [2024-07-15 14:10:41.714619] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:17.217 [2024-07-15 14:10:41.714728] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:17.217 [2024-07-15 14:10:41.714744] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:17.217 [2024-07-15 14:10:41.714758] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:17.217 [2024-07-15 14:10:41.714773] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:17.217 [2024-07-15 14:10:41.714786] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:17.217 [2024-07-15 14:10:41.714798] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:17.217 [2024-07-15 14:10:41.714809] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:17.217 [2024-07-15 14:10:41.714820] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:17.217 [2024-07-15 14:10:41.714830] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:17.217 [2024-07-15 14:10:41.714842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-07-15 14:10:41.714859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:17.217 [2024-07-15 14:10:41.714871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:29:17.217 [2024-07-15 14:10:41.714883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.715010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-07-15 14:10:41.715025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:17.217 [2024-07-15 14:10:41.715037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:29:17.217 [2024-07-15 14:10:41.715047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-07-15 14:10:41.715154] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:17.217 [2024-07-15 14:10:41.715170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:17.217 [2024-07-15 14:10:41.715186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:17.217 [2024-07-15 14:10:41.715198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.217 [2024-07-15 14:10:41.715210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:17.217 [2024-07-15 14:10:41.715220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:17.217 [2024-07-15 14:10:41.715230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:17.217 [2024-07-15 14:10:41.715240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:17.217 [2024-07-15 14:10:41.715250] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:17.217 [2024-07-15 14:10:41.715261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:17.217 [2024-07-15 14:10:41.715271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:17.217 [2024-07-15 14:10:41.715281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:17.217 [2024-07-15 14:10:41.715291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:17.217 [2024-07-15 14:10:41.715301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:17.217 [2024-07-15 14:10:41.715326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:17.217 [2024-07-15 14:10:41.715335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.217 [2024-07-15 14:10:41.715358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:17.217 [2024-07-15 14:10:41.715371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:17.217 [2024-07-15 14:10:41.715381] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.217 [2024-07-15 14:10:41.715391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:17.217 [2024-07-15 14:10:41.715414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:17.217 [2024-07-15 14:10:41.715425] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.218 [2024-07-15 14:10:41.715435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:17.218 [2024-07-15 14:10:41.715444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:17.218 [2024-07-15 14:10:41.715454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.218 [2024-07-15 14:10:41.715464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:17.218 [2024-07-15 14:10:41.715488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:17.218 [2024-07-15 14:10:41.715497] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.218 [2024-07-15 14:10:41.715506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:17.218 [2024-07-15 14:10:41.715515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:17.218 [2024-07-15 14:10:41.715525] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.218 [2024-07-15 14:10:41.715534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:17.218 [2024-07-15 14:10:41.715544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:17.218 [2024-07-15 14:10:41.715553] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:17.218 [2024-07-15 14:10:41.715578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:17.218 [2024-07-15 14:10:41.715588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:17.218 [2024-07-15 14:10:41.715598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:17.218 [2024-07-15 14:10:41.715607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:17.218 [2024-07-15 14:10:41.715617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:17.218 [2024-07-15 14:10:41.715627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.218 [2024-07-15 14:10:41.715637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:17.218 [2024-07-15 14:10:41.715646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:17.218 [2024-07-15 14:10:41.715656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.218 [2024-07-15 14:10:41.715665] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:17.218 [2024-07-15 14:10:41.715676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:17.218 [2024-07-15 14:10:41.715686] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:17.218 [2024-07-15 14:10:41.715696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.218 [2024-07-15 14:10:41.715707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:17.218 [2024-07-15 14:10:41.715717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:17.218 [2024-07-15 14:10:41.715726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:17.218 
[2024-07-15 14:10:41.715737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:17.218 [2024-07-15 14:10:41.715746] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:17.218 [2024-07-15 14:10:41.715756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:17.218 [2024-07-15 14:10:41.715769] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:17.218 [2024-07-15 14:10:41.715782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:17.218 [2024-07-15 14:10:41.715794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:17.218 [2024-07-15 14:10:41.715805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:17.218 [2024-07-15 14:10:41.715815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:17.218 [2024-07-15 14:10:41.715826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:17.218 [2024-07-15 14:10:41.715837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:17.218 [2024-07-15 14:10:41.715847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:17.218 [2024-07-15 14:10:41.715858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:17.218 [2024-07-15 14:10:41.715869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:17.218 [2024-07-15 14:10:41.715879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:17.218 [2024-07-15 14:10:41.715890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:17.218 [2024-07-15 14:10:41.715915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:17.218 [2024-07-15 14:10:41.715926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:17.218 [2024-07-15 14:10:41.715936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:17.218 [2024-07-15 14:10:41.715946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:17.218 [2024-07-15 14:10:41.715957] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:17.218 [2024-07-15 14:10:41.715969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:17.218 [2024-07-15 14:10:41.716000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:17.218 [2024-07-15 14:10:41.716011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:17.218 [2024-07-15 14:10:41.716021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:17.218 [2024-07-15 14:10:41.716032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:17.218 [2024-07-15 14:10:41.716045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.218 [2024-07-15 14:10:41.716056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:17.218 [2024-07-15 14:10:41.716066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:29:17.218 [2024-07-15 14:10:41.716077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.218 [2024-07-15 14:10:41.760059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.218 [2024-07-15 14:10:41.760133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:17.218 [2024-07-15 14:10:41.760156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.907 ms 00:29:17.218 [2024-07-15 14:10:41.760167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.218 [2024-07-15 14:10:41.760293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.218 [2024-07-15 14:10:41.760324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:17.218 [2024-07-15 14:10:41.760336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:17.218 [2024-07-15 14:10:41.760406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.802446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.802515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:17.477 [2024-07-15 14:10:41.802550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.946 ms 00:29:17.477 [2024-07-15 14:10:41.802562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.802662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.802680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:17.477 [2024-07-15 14:10:41.802693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:17.477 [2024-07-15 14:10:41.802710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.803081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.803100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:17.477 [2024-07-15 14:10:41.803113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:29:17.477 [2024-07-15 14:10:41.803124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.803273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.803292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:17.477 [2024-07-15 14:10:41.803320] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:29:17.477 [2024-07-15 14:10:41.803331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.820303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.820386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:17.477 [2024-07-15 14:10:41.820413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.909 ms 00:29:17.477 [2024-07-15 14:10:41.820426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.837315] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:17.477 [2024-07-15 14:10:41.837376] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:17.477 [2024-07-15 14:10:41.837406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.837418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:17.477 [2024-07-15 14:10:41.837432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.853 ms 00:29:17.477 [2024-07-15 14:10:41.837443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.869372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.869428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:17.477 [2024-07-15 14:10:41.869469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.882 ms 00:29:17.477 [2024-07-15 14:10:41.869482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.886811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.886856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:17.477 [2024-07-15 14:10:41.886874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.261 ms 00:29:17.477 [2024-07-15 14:10:41.886885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.903490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.903549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:17.477 [2024-07-15 14:10:41.903567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.558 ms 00:29:17.477 [2024-07-15 14:10:41.903579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.904496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.904547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:17.477 [2024-07-15 14:10:41.904563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:29:17.477 [2024-07-15 14:10:41.904574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.982013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.982088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:17.477 [2024-07-15 14:10:41.982108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.409 ms 00:29:17.477 [2024-07-15 14:10:41.982119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.995570] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:17.477 [2024-07-15 14:10:41.998235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.998273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:17.477 [2024-07-15 14:10:41.998291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.017 ms 00:29:17.477 [2024-07-15 14:10:41.998351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.477 [2024-07-15 14:10:41.998454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.477 [2024-07-15 14:10:41.998474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:17.477 [2024-07-15 14:10:41.998487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:17.477 [2024-07-15 14:10:41.998498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.478 [2024-07-15 14:10:42.000304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.478 [2024-07-15 14:10:42.000387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:17.478 [2024-07-15 14:10:42.000405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:29:17.478 [2024-07-15 14:10:42.000416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.478 [2024-07-15 14:10:42.000455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.478 [2024-07-15 14:10:42.000472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:17.478 [2024-07-15 14:10:42.000484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:17.478 [2024-07-15 14:10:42.000495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.478 [2024-07-15 14:10:42.000535] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:17.478 [2024-07-15 14:10:42.000551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.478 [2024-07-15 14:10:42.000566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:17.478 [2024-07-15 14:10:42.000578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:17.478 [2024-07-15 14:10:42.000589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.736 [2024-07-15 14:10:42.034089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.736 [2024-07-15 14:10:42.034132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:17.736 [2024-07-15 14:10:42.034150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.476 ms 00:29:17.736 [2024-07-15 14:10:42.034162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.736 [2024-07-15 14:10:42.034250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.736 [2024-07-15 14:10:42.034269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:17.736 [2024-07-15 14:10:42.034280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:17.736 [2024-07-15 14:10:42.034291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:17.736 [2024-07-15 14:10:42.042080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 357.347 ms, result 0 00:29:54.400  Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-15 14:11:18.797321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-07-15 14:11:18.797425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:54.400 [2024-07-15 14:11:18.797456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:54.400 [2024-07-15 14:11:18.797484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-07-15 14:11:18.797556] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:54.400 [2024-07-15 14:11:18.802000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-07-15 14:11:18.802051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:54.400 [2024-07-15 14:11:18.802074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.415 ms 00:29:54.400 [2024-07-15 14:11:18.802090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-07-15 14:11:18.802443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-07-15 14:11:18.802470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:54.400 [2024-07-15 14:11:18.802486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:29:54.400 [2024-07-15 14:11:18.802510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-07-15 14:11:18.815489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-07-15 14:11:18.815546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:54.400 [2024-07-15 14:11:18.815569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.951 ms 00:29:54.400 [2024-07-15 14:11:18.815584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-07-15 14:11:18.824142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-07-15 14:11:18.824215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*:
[FTL][ftl0] name: Finish L2P trims 00:29:54.400 [2024-07-15 14:11:18.824238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.510 ms 00:29:54.400 [2024-07-15 14:11:18.824252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-07-15 14:11:18.862859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-07-15 14:11:18.862914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:54.400 [2024-07-15 14:11:18.862937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.489 ms 00:29:54.400 [2024-07-15 14:11:18.862951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-07-15 14:11:18.884287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-07-15 14:11:18.884350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:54.400 [2024-07-15 14:11:18.884373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.282 ms 00:29:54.400 [2024-07-15 14:11:18.884388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-07-15 14:11:18.888145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-07-15 14:11:18.888200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:54.400 [2024-07-15 14:11:18.888221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.699 ms 00:29:54.400 [2024-07-15 14:11:18.888236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-07-15 14:11:18.927537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-07-15 14:11:18.927597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:54.400 [2024-07-15 14:11:18.927620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.263 ms 00:29:54.400 [2024-07-15 14:11:18.927635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.660 [2024-07-15 14:11:18.969955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.660 [2024-07-15 14:11:18.970035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:54.660 [2024-07-15 14:11:18.970059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.262 ms 00:29:54.660 [2024-07-15 14:11:18.970080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.660 [2024-07-15 14:11:19.011935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.660 [2024-07-15 14:11:19.011994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:54.660 [2024-07-15 14:11:19.012017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.782 ms 00:29:54.660 [2024-07-15 14:11:19.012048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.660 [2024-07-15 14:11:19.051794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.660 [2024-07-15 14:11:19.051851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:54.660 [2024-07-15 14:11:19.051873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.615 ms 00:29:54.660 [2024-07-15 14:11:19.051887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.660 [2024-07-15 14:11:19.051942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:29:54.660 [2024-07-15 14:11:19.051970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:54.660 [2024-07-15 14:11:19.051989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:29:54.660 [2024-07-15 14:11:19.052004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:54.660 [2024-07-15 14:11:19.052943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.052957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.052971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.052985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.052999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053054] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053429] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:54.661 [2024-07-15 14:11:19.053454] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:54.661 [2024-07-15 14:11:19.053468] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f654a337-809f-45b2-9ca4-a55998feb384 00:29:54.661 [2024-07-15 14:11:19.053483] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:29:54.661 [2024-07-15 14:11:19.053495] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136384 00:29:54.661 [2024-07-15 14:11:19.053516] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134400 00:29:54.661 [2024-07-15 14:11:19.053531] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:29:54.661 [2024-07-15 14:11:19.053544] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:54.661 [2024-07-15 14:11:19.053561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:54.661 [2024-07-15 14:11:19.053574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:54.661 [2024-07-15 14:11:19.053586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:54.661 [2024-07-15 14:11:19.053597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:54.661 [2024-07-15 14:11:19.053611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.661 [2024-07-15 14:11:19.053626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:54.661 [2024-07-15 14:11:19.053639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:29:54.661 [2024-07-15 14:11:19.053653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.661 [2024-07-15 14:11:19.071776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.661 [2024-07-15 14:11:19.071819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:54.661 [2024-07-15 14:11:19.071838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.070 ms 00:29:54.661 [2024-07-15 14:11:19.071867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.661 [2024-07-15 14:11:19.072296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.661 [2024-07-15 14:11:19.072336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:54.661 [2024-07-15 14:11:19.072351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:29:54.661 [2024-07-15 14:11:19.072362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.661 [2024-07-15 14:11:19.109037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.661 [2024-07-15 14:11:19.109095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:54.661 [2024-07-15 14:11:19.109113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.661 [2024-07-15 14:11:19.109125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.661 [2024-07-15 14:11:19.109195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.661 [2024-07-15 14:11:19.109211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:54.661 [2024-07-15 14:11:19.109223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.661 [2024-07-15 
14:11:19.109234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.661 [2024-07-15 14:11:19.109342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.661 [2024-07-15 14:11:19.109378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:54.661 [2024-07-15 14:11:19.109398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.661 [2024-07-15 14:11:19.109409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.661 [2024-07-15 14:11:19.109433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.661 [2024-07-15 14:11:19.109447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:54.661 [2024-07-15 14:11:19.109459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.661 [2024-07-15 14:11:19.109470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.920 [2024-07-15 14:11:19.207496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.920 [2024-07-15 14:11:19.207579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:54.920 [2024-07-15 14:11:19.207611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.920 [2024-07-15 14:11:19.207624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.920 [2024-07-15 14:11:19.292072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.920 [2024-07-15 14:11:19.292140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:54.920 [2024-07-15 14:11:19.292161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.920 [2024-07-15 14:11:19.292174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.920 [2024-07-15 14:11:19.292260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.920 [2024-07-15 14:11:19.292277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:54.920 [2024-07-15 14:11:19.292290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.920 [2024-07-15 14:11:19.292333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.920 [2024-07-15 14:11:19.292384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.920 [2024-07-15 14:11:19.292399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:54.920 [2024-07-15 14:11:19.292411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.920 [2024-07-15 14:11:19.292423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.920 [2024-07-15 14:11:19.292541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.920 [2024-07-15 14:11:19.292561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:54.920 [2024-07-15 14:11:19.292573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.920 [2024-07-15 14:11:19.292584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.920 [2024-07-15 14:11:19.292642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.920 [2024-07-15 14:11:19.292660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:54.920 [2024-07-15 14:11:19.292672] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.920 [2024-07-15 14:11:19.292683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.920 [2024-07-15 14:11:19.292728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.920 [2024-07-15 14:11:19.292743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:54.920 [2024-07-15 14:11:19.292754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.920 [2024-07-15 14:11:19.292766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.920 [2024-07-15 14:11:19.292822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.920 [2024-07-15 14:11:19.292839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:54.920 [2024-07-15 14:11:19.292851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.920 [2024-07-15 14:11:19.292862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.920 [2024-07-15 14:11:19.292999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 495.672 ms, result 0 00:29:55.855 00:29:55.855 00:29:55.855 14:11:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:58.385 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:58.385 14:11:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:58.385 [2024-07-15 14:11:22.757467] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:29:58.385 [2024-07-15 14:11:22.757620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85455 ] 00:29:58.385 [2024-07-15 14:11:22.923540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.642 [2024-07-15 14:11:23.166267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.213 [2024-07-15 14:11:23.581156] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.213 [2024-07-15 14:11:23.581251] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.213 [2024-07-15 14:11:23.749564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.213 [2024-07-15 14:11:23.749651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:59.213 [2024-07-15 14:11:23.749677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:59.213 [2024-07-15 14:11:23.749692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.213 [2024-07-15 14:11:23.749787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.213 [2024-07-15 14:11:23.749813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:59.213 [2024-07-15 14:11:23.749828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:59.213 [2024-07-15 14:11:23.749847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.213 [2024-07-15 14:11:23.749887] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:59.213 [2024-07-15 14:11:23.751064] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:59.213 [2024-07-15 14:11:23.751115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.213 [2024-07-15 14:11:23.751137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:59.213 [2024-07-15 14:11:23.751152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:29:59.213 [2024-07-15 14:11:23.751166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.213 [2024-07-15 14:11:23.752532] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:59.472 [2024-07-15 14:11:23.772708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.472 [2024-07-15 14:11:23.772781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:59.472 [2024-07-15 14:11:23.772804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.177 ms 00:29:59.472 [2024-07-15 14:11:23.772819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.472 [2024-07-15 14:11:23.772923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.472 [2024-07-15 14:11:23.772949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:59.472 [2024-07-15 14:11:23.772970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:59.472 [2024-07-15 14:11:23.772984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.472 [2024-07-15 14:11:23.778036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:59.472 [2024-07-15 14:11:23.778093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:59.472 [2024-07-15 14:11:23.778112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.926 ms 00:29:59.472 [2024-07-15 14:11:23.778126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.472 [2024-07-15 14:11:23.778250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.472 [2024-07-15 14:11:23.778276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:59.472 [2024-07-15 14:11:23.778293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:29:59.472 [2024-07-15 14:11:23.778330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.472 [2024-07-15 14:11:23.778422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.472 [2024-07-15 14:11:23.778455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:59.472 [2024-07-15 14:11:23.778472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:59.472 [2024-07-15 14:11:23.778486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.472 [2024-07-15 14:11:23.778531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:59.472 [2024-07-15 14:11:23.783714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.472 [2024-07-15 14:11:23.783760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:59.472 [2024-07-15 14:11:23.783778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.194 ms 00:29:59.472 [2024-07-15 14:11:23.783792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.472 [2024-07-15 14:11:23.783859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.472 [2024-07-15 14:11:23.783881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:59.472 [2024-07-15 14:11:23.783896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:59.472 [2024-07-15 14:11:23.783910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.472 [2024-07-15 14:11:23.783992] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:59.472 [2024-07-15 14:11:23.784044] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:59.472 [2024-07-15 14:11:23.784107] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:59.472 [2024-07-15 14:11:23.784137] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:59.472 [2024-07-15 14:11:23.784268] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:59.472 [2024-07-15 14:11:23.784296] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:59.472 [2024-07-15 14:11:23.784340] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:59.472 [2024-07-15 14:11:23.784359] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:59.472 [2024-07-15 14:11:23.784375] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:59.472 [2024-07-15 14:11:23.784391] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:59.472 [2024-07-15 14:11:23.784404] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:59.472 [2024-07-15 14:11:23.784417] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:59.472 [2024-07-15 14:11:23.784430] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:59.472 [2024-07-15 14:11:23.784445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.472 [2024-07-15 14:11:23.784464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:59.472 [2024-07-15 14:11:23.784479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:29:59.472 [2024-07-15 14:11:23.784492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.472 [2024-07-15 14:11:23.784612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.472 [2024-07-15 14:11:23.784641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:59.472 [2024-07-15 14:11:23.784657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:29:59.472 [2024-07-15 14:11:23.784671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.472 [2024-07-15 14:11:23.784804] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:59.472 [2024-07-15 14:11:23.784833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:59.472 [2024-07-15 14:11:23.784857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.472 [2024-07-15 14:11:23.784871] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.472 [2024-07-15 14:11:23.784886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:59.472 [2024-07-15 14:11:23.784899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:59.472 [2024-07-15 14:11:23.784911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:59.472 [2024-07-15 14:11:23.784924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:59.472 [2024-07-15 14:11:23.784936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:59.472 [2024-07-15 14:11:23.784949] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.472 [2024-07-15 14:11:23.784962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:59.472 [2024-07-15 14:11:23.784974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:59.472 [2024-07-15 14:11:23.784987] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.472 [2024-07-15 14:11:23.784999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:59.472 [2024-07-15 14:11:23.785012] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:59.472 [2024-07-15 14:11:23.785025] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.472 [2024-07-15 14:11:23.785037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:59.472 [2024-07-15 14:11:23.785049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:59.472 [2024-07-15 14:11:23.785062] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.472 [2024-07-15 14:11:23.785075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:59.472 [2024-07-15 14:11:23.785112] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:59.472 [2024-07-15 14:11:23.785125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.472 [2024-07-15 14:11:23.785138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:59.472 [2024-07-15 14:11:23.785151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:59.472 [2024-07-15 14:11:23.785164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.472 [2024-07-15 14:11:23.785176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:59.472 [2024-07-15 14:11:23.785193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:59.472 [2024-07-15 14:11:23.785205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.472 [2024-07-15 14:11:23.785217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:59.472 [2024-07-15 14:11:23.785230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:59.472 [2024-07-15 14:11:23.785243] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.473 [2024-07-15 14:11:23.785255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:59.473 [2024-07-15 14:11:23.785268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:59.473 [2024-07-15 14:11:23.785280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.473 [2024-07-15 14:11:23.785292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:59.473 [2024-07-15 14:11:23.785332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:59.473 [2024-07-15 14:11:23.785348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.473 [2024-07-15 14:11:23.785361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:59.473 [2024-07-15 14:11:23.785373] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:59.473 [2024-07-15 14:11:23.785386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.473 [2024-07-15 14:11:23.785398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:59.473 [2024-07-15 14:11:23.785411] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:59.473 [2024-07-15 14:11:23.785423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.473 [2024-07-15 14:11:23.785436] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:59.473 [2024-07-15 14:11:23.785449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:59.473 [2024-07-15 14:11:23.785462] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.473 [2024-07-15 14:11:23.785475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.473 [2024-07-15 14:11:23.785489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:59.473 [2024-07-15 14:11:23.785502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:59.473 [2024-07-15 14:11:23.785515] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:59.473 
[2024-07-15 14:11:23.785527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:59.473 [2024-07-15 14:11:23.785543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:59.473 [2024-07-15 14:11:23.785556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:59.473 [2024-07-15 14:11:23.785571] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:59.473 [2024-07-15 14:11:23.785588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.473 [2024-07-15 14:11:23.785604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:59.473 [2024-07-15 14:11:23.785618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:59.473 [2024-07-15 14:11:23.785632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:59.473 [2024-07-15 14:11:23.785646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:59.473 [2024-07-15 14:11:23.785659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:59.473 [2024-07-15 14:11:23.785673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:59.473 [2024-07-15 14:11:23.785687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:59.473 [2024-07-15 14:11:23.785701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:59.473 [2024-07-15 14:11:23.785715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:59.473 [2024-07-15 14:11:23.785729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:59.473 [2024-07-15 14:11:23.785744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:59.473 [2024-07-15 14:11:23.785757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:59.473 [2024-07-15 14:11:23.785771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:59.473 [2024-07-15 14:11:23.785785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:59.473 [2024-07-15 14:11:23.785799] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:59.473 [2024-07-15 14:11:23.785814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.473 [2024-07-15 14:11:23.785829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:59.473 [2024-07-15 14:11:23.785843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:59.473 [2024-07-15 14:11:23.785857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:59.473 [2024-07-15 14:11:23.785871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:59.473 [2024-07-15 14:11:23.785885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.473 [2024-07-15 14:11:23.785906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:59.473 [2024-07-15 14:11:23.785921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms 00:29:59.473 [2024-07-15 14:11:23.785934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.473 [2024-07-15 14:11:23.832770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.473 [2024-07-15 14:11:23.832840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:59.473 [2024-07-15 14:11:23.832861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.757 ms 00:29:59.473 [2024-07-15 14:11:23.832874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.473 [2024-07-15 14:11:23.832999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.473 [2024-07-15 14:11:23.833017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:59.473 [2024-07-15 14:11:23.833030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:59.473 [2024-07-15 14:11:23.833048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.473 [2024-07-15 14:11:23.872071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.473 [2024-07-15 14:11:23.872137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:59.473 [2024-07-15 14:11:23.872158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.924 ms 00:29:59.473 [2024-07-15 14:11:23.872170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.473 [2024-07-15 14:11:23.872246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.473 [2024-07-15 14:11:23.872264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:59.473 [2024-07-15 14:11:23.872277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:59.473 [2024-07-15 14:11:23.872289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.473 [2024-07-15 14:11:23.872700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.473 [2024-07-15 14:11:23.872731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:59.473 [2024-07-15 14:11:23.872746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:29:59.473 [2024-07-15 14:11:23.872757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.473 [2024-07-15 14:11:23.872914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.473 [2024-07-15 14:11:23.872949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:59.473 [2024-07-15 14:11:23.872963] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:29:59.473 [2024-07-15 14:11:23.872974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.473 [2024-07-15 14:11:23.889516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.473 [2024-07-15 14:11:23.889575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:59.473 [2024-07-15 14:11:23.889594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.509 ms 00:29:59.473 [2024-07-15 14:11:23.889607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.473 [2024-07-15 14:11:23.907937] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:59.473 [2024-07-15 14:11:23.908021] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:59.473 [2024-07-15 14:11:23.908045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.473 [2024-07-15 14:11:23.908059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:59.473 [2024-07-15 14:11:23.908076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.277 ms 00:29:59.473 [2024-07-15 14:11:23.908088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.473 [2024-07-15 14:11:23.938789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.474 [2024-07-15 14:11:23.938887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:59.474 [2024-07-15 14:11:23.938909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.601 ms 00:29:59.474 [2024-07-15 14:11:23.938937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.474 [2024-07-15 14:11:23.955286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.474 [2024-07-15 14:11:23.955357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:59.474 [2024-07-15 14:11:23.955377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.247 ms 00:29:59.474 [2024-07-15 14:11:23.955389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.474 [2024-07-15 14:11:23.971010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.474 [2024-07-15 14:11:23.971056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:59.474 [2024-07-15 14:11:23.971074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.561 ms 00:29:59.474 [2024-07-15 14:11:23.971085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.474 [2024-07-15 14:11:23.971978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.474 [2024-07-15 14:11:23.972017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:59.474 [2024-07-15 14:11:23.972034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:29:59.474 [2024-07-15 14:11:23.972046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.732 [2024-07-15 14:11:24.045453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.732 [2024-07-15 14:11:24.045548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:59.732 [2024-07-15 14:11:24.045573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.379 ms 00:29:59.732 [2024-07-15 14:11:24.045586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.732 [2024-07-15 14:11:24.058336] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:59.732 [2024-07-15 14:11:24.061076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.732 [2024-07-15 14:11:24.061115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:59.732 [2024-07-15 14:11:24.061133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.405 ms 00:29:59.732 [2024-07-15 14:11:24.061145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.732 [2024-07-15 14:11:24.061267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.732 [2024-07-15 14:11:24.061288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:59.732 [2024-07-15 14:11:24.061317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:59.732 [2024-07-15 14:11:24.061332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.732 [2024-07-15 14:11:24.062010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.732 [2024-07-15 14:11:24.062054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:59.732 [2024-07-15 14:11:24.062070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:29:59.732 [2024-07-15 14:11:24.062082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.732 [2024-07-15 14:11:24.062121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.732 [2024-07-15 14:11:24.062137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:59.732 [2024-07-15 14:11:24.062150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:59.732 [2024-07-15 14:11:24.062161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.732 [2024-07-15 14:11:24.062203] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:59.732 [2024-07-15 14:11:24.062221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.732 [2024-07-15 14:11:24.062233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:59.732 [2024-07-15 14:11:24.062250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:59.732 [2024-07-15 14:11:24.062261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.732 [2024-07-15 14:11:24.093414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.732 [2024-07-15 14:11:24.093464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:59.732 [2024-07-15 14:11:24.093484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.127 ms 00:29:59.732 [2024-07-15 14:11:24.093497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.732 [2024-07-15 14:11:24.093658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.732 [2024-07-15 14:11:24.093692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:59.732 [2024-07-15 14:11:24.093706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:59.732 [2024-07-15 14:11:24.093718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:59.732 [2024-07-15 14:11:24.094909] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 344.831 ms, result 0 00:30:38.582  Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-15 14:12:02.882367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.582 [2024-07-15 14:12:02.882460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:38.582 [2024-07-15 14:12:02.882489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:38.582 [2024-07-15 14:12:02.882506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.582 [2024-07-15 14:12:02.882547] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:38.582 [2024-07-15 14:12:02.887465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.582 [2024-07-15 14:12:02.887515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:38.582 [2024-07-15 14:12:02.887534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.887 ms 00:30:38.582 [2024-07-15 14:12:02.887548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.582 [2024-07-15 14:12:02.887849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.583 [2024-07-15 14:12:02.887901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:38.583 [2024-07-15 14:12:02.887929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:30:38.583 [2024-07-15 14:12:02.887944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.583 [2024-07-15 14:12:02.892992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.583 [2024-07-15 14:12:02.893045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:38.583 [2024-07-15 14:12:02.893065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.020 ms 00:30:38.583 [2024-07-15 14:12:02.893080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.583 [2024-07-15 14:12:02.901701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.583 [2024-07-15
14:12:02.901752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:38.583 [2024-07-15 14:12:02.901782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.587 ms 00:30:38.583 [2024-07-15 14:12:02.901806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.583 [2024-07-15 14:12:02.940828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.583 [2024-07-15 14:12:02.940913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:38.583 [2024-07-15 14:12:02.940938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.908 ms 00:30:38.583 [2024-07-15 14:12:02.940952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.583 [2024-07-15 14:12:02.963006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.583 [2024-07-15 14:12:02.963089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:38.583 [2024-07-15 14:12:02.963113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.974 ms 00:30:38.583 [2024-07-15 14:12:02.963129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.583 [2024-07-15 14:12:02.966433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.583 [2024-07-15 14:12:02.966488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:38.583 [2024-07-15 14:12:02.966509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:30:38.583 [2024-07-15 14:12:02.966534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.583 [2024-07-15 14:12:03.005643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.583 [2024-07-15 14:12:03.005722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:38.583 [2024-07-15 14:12:03.005745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.077 ms 00:30:38.583 [2024-07-15 14:12:03.005760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.583 [2024-07-15 14:12:03.044651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.583 [2024-07-15 14:12:03.044729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:38.583 [2024-07-15 14:12:03.044751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.812 ms 00:30:38.583 [2024-07-15 14:12:03.044765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.583 [2024-07-15 14:12:03.083337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.583 [2024-07-15 14:12:03.083419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:38.583 [2024-07-15 14:12:03.083465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.492 ms 00:30:38.583 [2024-07-15 14:12:03.083480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.583 [2024-07-15 14:12:03.121650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.583 [2024-07-15 14:12:03.121725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:38.583 [2024-07-15 14:12:03.121749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.010 ms 00:30:38.583 [2024-07-15 14:12:03.121763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.583 [2024-07-15 14:12:03.121828] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:38.583 [2024-07-15 14:12:03.121858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:38.583 [2024-07-15 14:12:03.121876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:30:38.583 [2024-07-15 14:12:03.121891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.121906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.121920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.121934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.121948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.121963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.121977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.121991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122203] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 
14:12:03.122579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:30:38.583 [2024-07-15 14:12:03.122979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.122994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:30:38.583 [2024-07-15 14:12:03.123364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:38.584 [2024-07-15 14:12:03.123389] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:38.584 [2024-07-15 14:12:03.123404] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f654a337-809f-45b2-9ca4-a55998feb384 00:30:38.584 [2024-07-15 14:12:03.123418] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:30:38.584 [2024-07-15 14:12:03.123431] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:38.584 [2024-07-15 14:12:03.123453] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:38.584 [2024-07-15 14:12:03.123466] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:38.584 [2024-07-15 14:12:03.123479] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:38.584 [2024-07-15 14:12:03.123493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:38.584 [2024-07-15 14:12:03.123506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:38.584 [2024-07-15 14:12:03.123518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:38.584 [2024-07-15 14:12:03.123530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:38.584 [2024-07-15 14:12:03.123544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.584 [2024-07-15 14:12:03.123558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:38.584 [2024-07-15 14:12:03.123573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.719 ms 00:30:38.584 [2024-07-15 14:12:03.123586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.842 [2024-07-15 14:12:03.143884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.842 [2024-07-15 14:12:03.143949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:38.842 [2024-07-15 14:12:03.143986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.231 ms 00:30:38.842 [2024-07-15 14:12:03.144000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.842 [2024-07-15 14:12:03.144590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.842 [2024-07-15 14:12:03.144631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:38.842 [2024-07-15 14:12:03.144650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:30:38.842 [2024-07-15 14:12:03.144664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.842 [2024-07-15 14:12:03.189940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.842 [2024-07-15 14:12:03.190016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:38.842 [2024-07-15 14:12:03.190039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.842 [2024-07-15 14:12:03.190053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.842 [2024-07-15 14:12:03.190145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.842 [2024-07-15 14:12:03.190164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:38.842 [2024-07-15 14:12:03.190179] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.842 [2024-07-15 14:12:03.190194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.842 [2024-07-15 14:12:03.190355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.842 [2024-07-15 14:12:03.190380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:38.842 [2024-07-15 14:12:03.190395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.842 [2024-07-15 14:12:03.190408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.842 [2024-07-15 14:12:03.190436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.842 [2024-07-15 14:12:03.190454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:38.842 [2024-07-15 14:12:03.190468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.842 [2024-07-15 14:12:03.190481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.842 [2024-07-15 14:12:03.310854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.842 [2024-07-15 14:12:03.310937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:38.842 [2024-07-15 14:12:03.310960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.842 [2024-07-15 14:12:03.310975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.099 [2024-07-15 14:12:03.401646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.099 [2024-07-15 14:12:03.401717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:39.099 [2024-07-15 14:12:03.401737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.099 [2024-07-15 14:12:03.401749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.099 [2024-07-15 14:12:03.401837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.099 [2024-07-15 14:12:03.401865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:39.099 [2024-07-15 14:12:03.401877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.099 [2024-07-15 14:12:03.401889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.099 [2024-07-15 14:12:03.401935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.099 [2024-07-15 14:12:03.401951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:39.099 [2024-07-15 14:12:03.401964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.099 [2024-07-15 14:12:03.401975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.099 [2024-07-15 14:12:03.402100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.099 [2024-07-15 14:12:03.402136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:39.099 [2024-07-15 14:12:03.402150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.099 [2024-07-15 14:12:03.402162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.099 [2024-07-15 14:12:03.402213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.099 [2024-07-15 14:12:03.402231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 
00:30:39.099 [2024-07-15 14:12:03.402243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.099 [2024-07-15 14:12:03.402254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.099 [2024-07-15 14:12:03.402299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.099 [2024-07-15 14:12:03.402349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:39.099 [2024-07-15 14:12:03.402370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.099 [2024-07-15 14:12:03.402381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.099 [2024-07-15 14:12:03.402437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.099 [2024-07-15 14:12:03.402454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:39.099 [2024-07-15 14:12:03.402466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.099 [2024-07-15 14:12:03.402477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.099 [2024-07-15 14:12:03.402634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.264 ms, result 0 00:30:40.045 00:30:40.045 00:30:40.045 14:12:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:42.570 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:42.570 14:12:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:42.570 14:12:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:42.570 14:12:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:42.570 14:12:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:42.570 14:12:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:42.570 14:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:42.570 14:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:42.570 Process with pid 83640 is not found 00:30:42.570 14:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 83640 00:30:42.570 14:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 83640 ']' 00:30:42.570 14:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 83640 00:30:42.570 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (83640) - No such process 00:30:42.570 14:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 83640 is not found' 00:30:42.570 14:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:42.829 14:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:42.829 Remove shared memory files 00:30:42.829 14:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:42.829 14:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:42.829 14:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:42.829 14:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 
00:30:42.829 14:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:42.829 14:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:42.829 00:30:42.829 real 3m43.127s 00:30:42.829 user 4m15.548s 00:30:42.829 sys 0m37.614s 00:30:42.829 14:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:42.829 ************************************ 00:30:42.829 END TEST ftl_dirty_shutdown 00:30:42.829 14:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:42.829 ************************************ 00:30:42.829 14:12:07 ftl -- common/autotest_common.sh@1142 -- # return 0 00:30:42.829 14:12:07 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:42.829 14:12:07 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:30:42.829 14:12:07 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.829 14:12:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:42.829 ************************************ 00:30:42.829 START TEST ftl_upgrade_shutdown 00:30:42.829 ************************************ 00:30:42.829 14:12:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:43.087 * Looking for test storage... 00:30:43.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:43.087 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:43.088 
14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85949 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85949 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 85949 ']' 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:43.088 14:12:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:43.088 [2024-07-15 14:12:07.589379] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:30:43.088 [2024-07-15 14:12:07.589561] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85949 ] 00:30:43.346 [2024-07-15 14:12:07.760081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.604 [2024-07-15 14:12:07.951535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:44.171 14:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:44.738 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:44.738 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:44.738 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:44.738 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:30:44.738 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:44.738 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:30:44.738 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:30:44.738 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:44.738 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:44.738 { 00:30:44.738 "name": "basen1", 00:30:44.738 "aliases": [ 00:30:44.738 "b1a0d441-91c0-474b-a7ad-8670059d8c9b" 00:30:44.738 ], 00:30:44.738 "product_name": "NVMe disk", 00:30:44.738 "block_size": 4096, 00:30:44.738 "num_blocks": 1310720, 00:30:44.738 "uuid": "b1a0d441-91c0-474b-a7ad-8670059d8c9b", 00:30:44.738 "assigned_rate_limits": { 00:30:44.738 "rw_ios_per_sec": 0, 00:30:44.738 "rw_mbytes_per_sec": 0, 00:30:44.738 "r_mbytes_per_sec": 0, 00:30:44.738 "w_mbytes_per_sec": 0 00:30:44.738 }, 00:30:44.738 "claimed": true, 00:30:44.738 "claim_type": "read_many_write_one", 00:30:44.738 "zoned": false, 00:30:44.738 "supported_io_types": { 00:30:44.738 "read": true, 00:30:44.738 "write": true, 00:30:44.738 "unmap": true, 00:30:44.738 "flush": true, 00:30:44.738 "reset": true, 00:30:44.738 "nvme_admin": true, 00:30:44.738 "nvme_io": true, 00:30:44.738 "nvme_io_md": false, 00:30:44.738 "write_zeroes": true, 00:30:44.738 "zcopy": false, 00:30:44.738 "get_zone_info": false, 00:30:44.738 "zone_management": false, 00:30:44.738 "zone_append": false, 00:30:44.738 "compare": true, 00:30:44.738 "compare_and_write": false, 00:30:44.738 "abort": true, 00:30:44.738 "seek_hole": false, 00:30:44.738 "seek_data": false, 00:30:44.738 "copy": true, 00:30:44.738 "nvme_iov_md": false 00:30:44.738 }, 00:30:44.738 "driver_specific": { 00:30:44.738 "nvme": [ 00:30:44.738 { 00:30:44.738 "pci_address": "0000:00:11.0", 00:30:44.738 "trid": { 00:30:44.738 "trtype": "PCIe", 00:30:44.738 "traddr": "0000:00:11.0" 00:30:44.738 }, 00:30:44.738 "ctrlr_data": { 00:30:44.738 "cntlid": 0, 00:30:44.738 "vendor_id": "0x1b36", 00:30:44.738 "model_number": "QEMU NVMe Ctrl", 00:30:44.738 "serial_number": "12341", 00:30:44.738 "firmware_revision": "8.0.0", 00:30:44.738 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:44.738 "oacs": { 00:30:44.738 "security": 0, 00:30:44.738 "format": 1, 00:30:44.738 "firmware": 0, 00:30:44.738 "ns_manage": 1 00:30:44.738 }, 00:30:44.738 "multi_ctrlr": false, 00:30:44.738 "ana_reporting": false 00:30:44.738 }, 00:30:44.738 "vs": { 00:30:44.738 "nvme_version": "1.4" 00:30:44.738 }, 00:30:44.738 "ns_data": { 00:30:44.738 "id": 1, 00:30:44.738 "can_share": false 00:30:44.738 } 00:30:44.738 } 00:30:44.738 ], 00:30:44.738 "mp_policy": "active_passive" 00:30:44.738 } 00:30:44.738 } 00:30:44.738 ]' 00:30:44.738 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:44.997 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:30:44.997 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:44.997 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:30:44.997 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:30:44.997 14:12:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:30:44.997 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:44.997 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:44.997 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:44.997 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:44.997 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:45.255 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=58c23042-148c-4509-8c7b-00059463fce8 00:30:45.255 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:45.255 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 58c23042-148c-4509-8c7b-00059463fce8 00:30:45.514 14:12:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:45.773 14:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=23b3a468-cf0d-4486-a90f-3c5465f137e1 00:30:45.773 14:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 23b3a468-cf0d-4486-a90f-3c5465f137e1 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=90e0080b-e587-4836-8f80-a7c52799ac8f 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 90e0080b-e587-4836-8f80-a7c52799ac8f ]] 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 90e0080b-e587-4836-8f80-a7c52799ac8f 5120 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=90e0080b-e587-4836-8f80-a7c52799ac8f 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 90e0080b-e587-4836-8f80-a7c52799ac8f 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=90e0080b-e587-4836-8f80-a7c52799ac8f 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:30:46.032 14:12:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:30:46.290 14:12:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 90e0080b-e587-4836-8f80-a7c52799ac8f 00:30:46.548 14:12:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:46.548 { 00:30:46.548 "name": "90e0080b-e587-4836-8f80-a7c52799ac8f", 00:30:46.548 "aliases": [ 00:30:46.548 "lvs/basen1p0" 00:30:46.548 ], 00:30:46.548 "product_name": "Logical Volume", 00:30:46.548 "block_size": 4096, 00:30:46.548 "num_blocks": 5242880, 00:30:46.548 "uuid": "90e0080b-e587-4836-8f80-a7c52799ac8f", 00:30:46.548 "assigned_rate_limits": { 00:30:46.548 "rw_ios_per_sec": 0, 00:30:46.548 "rw_mbytes_per_sec": 0, 00:30:46.548 "r_mbytes_per_sec": 0, 00:30:46.548 "w_mbytes_per_sec": 0 00:30:46.548 }, 00:30:46.548 "claimed": false, 00:30:46.548 "zoned": false, 00:30:46.548 "supported_io_types": { 00:30:46.548 "read": true, 00:30:46.548 "write": true, 00:30:46.548 "unmap": true, 00:30:46.548 "flush": false, 00:30:46.548 "reset": true, 00:30:46.548 "nvme_admin": false, 00:30:46.548 "nvme_io": false, 00:30:46.548 "nvme_io_md": false, 00:30:46.548 "write_zeroes": true, 00:30:46.548 
"zcopy": false, 00:30:46.548 "get_zone_info": false, 00:30:46.548 "zone_management": false, 00:30:46.548 "zone_append": false, 00:30:46.548 "compare": false, 00:30:46.548 "compare_and_write": false, 00:30:46.548 "abort": false, 00:30:46.548 "seek_hole": true, 00:30:46.548 "seek_data": true, 00:30:46.548 "copy": false, 00:30:46.548 "nvme_iov_md": false 00:30:46.548 }, 00:30:46.548 "driver_specific": { 00:30:46.548 "lvol": { 00:30:46.548 "lvol_store_uuid": "23b3a468-cf0d-4486-a90f-3c5465f137e1", 00:30:46.548 "base_bdev": "basen1", 00:30:46.548 "thin_provision": true, 00:30:46.548 "num_allocated_clusters": 0, 00:30:46.548 "snapshot": false, 00:30:46.548 "clone": false, 00:30:46.548 "esnap_clone": false 00:30:46.548 } 00:30:46.548 } 00:30:46.548 } 00:30:46.548 ]' 00:30:46.548 14:12:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:46.548 14:12:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:30:46.548 14:12:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:46.548 14:12:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:30:46.548 14:12:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:30:46.548 14:12:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:30:46.548 14:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:46.548 14:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:46.548 14:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:46.806 14:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:46.806 14:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:46.806 14:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:47.065 14:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:47.065 14:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:47.065 14:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 90e0080b-e587-4836-8f80-a7c52799ac8f -c cachen1p0 --l2p_dram_limit 2 00:30:47.327 [2024-07-15 14:12:11.806223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.327 [2024-07-15 14:12:11.806296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:47.327 [2024-07-15 14:12:11.806337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:47.327 [2024-07-15 14:12:11.806354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.327 [2024-07-15 14:12:11.806438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.327 [2024-07-15 14:12:11.806460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:47.327 [2024-07-15 14:12:11.806475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:30:47.327 [2024-07-15 14:12:11.806489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.327 [2024-07-15 14:12:11.806519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:47.327 [2024-07-15 14:12:11.807524] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:47.327 [2024-07-15 14:12:11.807563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.327 [2024-07-15 14:12:11.807583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:47.327 [2024-07-15 14:12:11.807597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.052 ms 00:30:47.327 [2024-07-15 14:12:11.807611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.327 [2024-07-15 14:12:11.807742] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 3e7932a1-a53c-4831-92f5-d4a8dbaa5201 00:30:47.327 [2024-07-15 14:12:11.808807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.327 [2024-07-15 14:12:11.808848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:47.327 [2024-07-15 14:12:11.808869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:47.327 [2024-07-15 14:12:11.808883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.327 [2024-07-15 14:12:11.813470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.327 [2024-07-15 14:12:11.813522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:47.327 [2024-07-15 14:12:11.813547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.505 ms 00:30:47.327 [2024-07-15 14:12:11.813560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.327 [2024-07-15 14:12:11.813630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.327 [2024-07-15 14:12:11.813650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:47.327 [2024-07-15 14:12:11.813665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:47.327 [2024-07-15 14:12:11.813678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.327 [2024-07-15 14:12:11.813771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.327 [2024-07-15 14:12:11.813789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:47.327 [2024-07-15 14:12:11.813805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:47.327 [2024-07-15 14:12:11.813820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.327 [2024-07-15 14:12:11.813857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:47.327 [2024-07-15 14:12:11.818486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.327 [2024-07-15 14:12:11.818535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:47.327 [2024-07-15 14:12:11.818552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.642 ms 00:30:47.327 [2024-07-15 14:12:11.818566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.327 [2024-07-15 14:12:11.818606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.327 [2024-07-15 14:12:11.818636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:47.327 [2024-07-15 14:12:11.818652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:47.327 [2024-07-15 14:12:11.818666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
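The xtrace above assembles the whole FTL device stack one rpc.py call at a time before bdev_ftl_create runs. Condensed into a sketch (paths shortened; the PCIe address 0000:00:10.0 and the UUIDs are the values this particular run produced, not fixed inputs):

  lvs=$(rpc.py bdev_lvol_create_lvstore basen1 lvs)                    # lvstore on the base namespace
  base=$(rpc.py bdev_lvol_create basen1p0 20480 -t -u "$lvs")          # thin-provisioned 20 GiB base lvol
  rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # -> cachen1, backing the NV cache
  rpc.py bdev_split_create cachen1 -s 5120 1                           # -> cachen1p0, the 5 GiB write buffer
  rpc.py -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2

The get_bdev_size helper cross-checks the base lvol on the way: block_size 4096 * num_blocks 5242880 = 20480 MiB, which is the bare "echo 20480" visible in the trace.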
00:30:47.327 [2024-07-15 14:12:11.818725] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:47.327 [2024-07-15 14:12:11.818898] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:47.327 [2024-07-15 14:12:11.818918] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:47.327 [2024-07-15 14:12:11.818938] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:30:47.327 [2024-07-15 14:12:11.818955] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:47.327 [2024-07-15 14:12:11.818972] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:47.327 [2024-07-15 14:12:11.818985] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:47.327 [2024-07-15 14:12:11.818998] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:47.327 [2024-07-15 14:12:11.819026] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:47.327 [2024-07-15 14:12:11.819039] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:47.327 [2024-07-15 14:12:11.819053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.327 [2024-07-15 14:12:11.819066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:47.327 [2024-07-15 14:12:11.819080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.340 ms 00:30:47.327 [2024-07-15 14:12:11.819093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.327 [2024-07-15 14:12:11.819188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.327 [2024-07-15 14:12:11.819206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:47.327 [2024-07-15 14:12:11.819219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:30:47.327 [2024-07-15 14:12:11.819233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.327 [2024-07-15 14:12:11.819374] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:47.327 [2024-07-15 14:12:11.819409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:47.327 [2024-07-15 14:12:11.819425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:47.327 [2024-07-15 14:12:11.819440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.327 [2024-07-15 14:12:11.819452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:47.327 [2024-07-15 14:12:11.819466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:47.328 [2024-07-15 14:12:11.819490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:47.328 [2024-07-15 14:12:11.819505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:47.328 [2024-07-15 14:12:11.819517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:47.328 [2024-07-15 14:12:11.819530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.328 [2024-07-15 14:12:11.819541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:47.328 [2024-07-15 14:12:11.819556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:30:47.328 [2024-07-15 14:12:11.819568] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.328 [2024-07-15 14:12:11.819581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:47.328 [2024-07-15 14:12:11.819593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:47.328 [2024-07-15 14:12:11.819605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.328 [2024-07-15 14:12:11.819617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:47.328 [2024-07-15 14:12:11.819632] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:47.328 [2024-07-15 14:12:11.819644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.328 [2024-07-15 14:12:11.819657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:47.328 [2024-07-15 14:12:11.819671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:47.328 [2024-07-15 14:12:11.819685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:47.328 [2024-07-15 14:12:11.819696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:47.328 [2024-07-15 14:12:11.819709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:47.328 [2024-07-15 14:12:11.819721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:47.328 [2024-07-15 14:12:11.819734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:47.328 [2024-07-15 14:12:11.819745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:47.328 [2024-07-15 14:12:11.819758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:47.328 [2024-07-15 14:12:11.819769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:47.328 [2024-07-15 14:12:11.819782] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:47.328 [2024-07-15 14:12:11.819794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:47.328 [2024-07-15 14:12:11.819807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:47.328 [2024-07-15 14:12:11.819818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:47.328 [2024-07-15 14:12:11.819833] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.328 [2024-07-15 14:12:11.819845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:47.328 [2024-07-15 14:12:11.819858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:47.328 [2024-07-15 14:12:11.819869] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.328 [2024-07-15 14:12:11.819884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:47.328 [2024-07-15 14:12:11.819896] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:47.328 [2024-07-15 14:12:11.819909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.328 [2024-07-15 14:12:11.819920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:47.328 [2024-07-15 14:12:11.819933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:47.328 [2024-07-15 14:12:11.819945] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.328 [2024-07-15 14:12:11.819957] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:30:47.328 [2024-07-15 14:12:11.819969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:47.328 [2024-07-15 14:12:11.819983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:47.328 [2024-07-15 14:12:11.819995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.328 [2024-07-15 14:12:11.820010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:47.328 [2024-07-15 14:12:11.820022] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:47.328 [2024-07-15 14:12:11.820037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:47.328 [2024-07-15 14:12:11.820048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:47.328 [2024-07-15 14:12:11.820061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:47.328 [2024-07-15 14:12:11.820073] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:47.328 [2024-07-15 14:12:11.820091] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:47.328 [2024-07-15 14:12:11.820106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:47.328 [2024-07-15 14:12:11.820125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:47.328 [2024-07-15 14:12:11.820138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:47.328 [2024-07-15 14:12:11.820152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:47.328 [2024-07-15 14:12:11.820164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:47.328 [2024-07-15 14:12:11.820178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:47.328 [2024-07-15 14:12:11.820190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:47.328 [2024-07-15 14:12:11.820205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:47.328 [2024-07-15 14:12:11.820218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:47.328 [2024-07-15 14:12:11.820232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:47.328 [2024-07-15 14:12:11.820245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:47.328 [2024-07-15 14:12:11.820261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:47.328 [2024-07-15 14:12:11.820273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:47.328 [2024-07-15 14:12:11.820287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:30:47.328 [2024-07-15 14:12:11.820311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:47.328 [2024-07-15 14:12:11.820345] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:47.328 [2024-07-15 14:12:11.820359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:47.328 [2024-07-15 14:12:11.820374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:47.328 [2024-07-15 14:12:11.820387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:47.328 [2024-07-15 14:12:11.820401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:47.328 [2024-07-15 14:12:11.820414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:47.328 [2024-07-15 14:12:11.820429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.328 [2024-07-15 14:12:11.820441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:47.328 [2024-07-15 14:12:11.820455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.133 ms 00:30:47.328 [2024-07-15 14:12:11.820468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.328 [2024-07-15 14:12:11.820528] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
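The two layout dumps above describe the same regions in different units: dump_region prints offsets and sizes in MiB, while the superblock dump gives blk_offs/blk_sz as hex counts of 4 KiB blocks (the device block size established earlier). A quick sanity check of the conversion, using plain bash and bc:

  echo "scale=2; $((0xe80)) * 4096 / 1048576" | bc    # 14.50 -> "Region l2p ... blocks: 14.50 MiB"
  echo "scale=2; $((0x800)) * 4096 / 1048576" | bc    # 8.00  -> each p2l checkpoint region
  echo "scale=2; $((0x20)) * 4096 / 1048576" | bc     # .12   -> the recurring 0.12 MiB metadata regions

The 14.5 MiB L2P region also matches the reported geometry: 3774873 L2P entries * 4 bytes per address is about 14.4 MiB, rounded up to whole blocks, of which --l2p_dram_limit 2 allows at most 2 MiB to stay resident in DRAM.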
00:30:47.328 [2024-07-15 14:12:11.820546] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:49.229 [2024-07-15 14:12:13.767963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.229 [2024-07-15 14:12:13.768044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:49.229 [2024-07-15 14:12:13.768068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1947.438 ms 00:30:49.229 [2024-07-15 14:12:13.768082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.488 [2024-07-15 14:12:13.800789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.488 [2024-07-15 14:12:13.800870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:49.488 [2024-07-15 14:12:13.800894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.417 ms 00:30:49.488 [2024-07-15 14:12:13.800908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.488 [2024-07-15 14:12:13.801049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.488 [2024-07-15 14:12:13.801069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:49.489 [2024-07-15 14:12:13.801085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:49.489 [2024-07-15 14:12:13.801101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.489 [2024-07-15 14:12:13.839762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.489 [2024-07-15 14:12:13.839828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:49.489 [2024-07-15 14:12:13.839851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.600 ms 00:30:49.489 [2024-07-15 14:12:13.839865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.489 [2024-07-15 14:12:13.839933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.489 [2024-07-15 14:12:13.839952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:49.489 [2024-07-15 14:12:13.839968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:49.489 [2024-07-15 14:12:13.839981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.489 [2024-07-15 14:12:13.840374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.489 [2024-07-15 14:12:13.840403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:49.489 [2024-07-15 14:12:13.840421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.299 ms 00:30:49.489 [2024-07-15 14:12:13.840434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.489 [2024-07-15 14:12:13.840499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.489 [2024-07-15 14:12:13.840518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:49.489 [2024-07-15 14:12:13.840537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:49.489 [2024-07-15 14:12:13.840549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.489 [2024-07-15 14:12:13.857931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.489 [2024-07-15 14:12:13.857995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:49.489 [2024-07-15 14:12:13.858018] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.349 ms 00:30:49.489 [2024-07-15 14:12:13.858032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.489 [2024-07-15 14:12:13.871602] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:49.489 [2024-07-15 14:12:13.872481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.489 [2024-07-15 14:12:13.872519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:49.489 [2024-07-15 14:12:13.872538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.318 ms 00:30:49.489 [2024-07-15 14:12:13.872554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.489 [2024-07-15 14:12:13.907542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.489 [2024-07-15 14:12:13.907621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:49.489 [2024-07-15 14:12:13.907644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.939 ms 00:30:49.489 [2024-07-15 14:12:13.907659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.489 [2024-07-15 14:12:13.907783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.489 [2024-07-15 14:12:13.907810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:49.489 [2024-07-15 14:12:13.907825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:30:49.489 [2024-07-15 14:12:13.907841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.489 [2024-07-15 14:12:13.938720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.489 [2024-07-15 14:12:13.938774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:49.489 [2024-07-15 14:12:13.938794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.799 ms 00:30:49.489 [2024-07-15 14:12:13.938810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.489 [2024-07-15 14:12:13.975633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.489 [2024-07-15 14:12:13.975730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:49.489 [2024-07-15 14:12:13.975762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.765 ms 00:30:49.489 [2024-07-15 14:12:13.975787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.489 [2024-07-15 14:12:13.976854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.489 [2024-07-15 14:12:13.976928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:49.489 [2024-07-15 14:12:13.976955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.995 ms 00:30:49.489 [2024-07-15 14:12:13.976984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.748 [2024-07-15 14:12:14.070426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.748 [2024-07-15 14:12:14.070520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:49.748 [2024-07-15 14:12:14.070542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 93.348 ms 00:30:49.748 [2024-07-15 14:12:14.070562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.748 [2024-07-15 14:12:14.103764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:49.748 [2024-07-15 14:12:14.103844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:49.748 [2024-07-15 14:12:14.103866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.141 ms 00:30:49.748 [2024-07-15 14:12:14.103881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.748 [2024-07-15 14:12:14.136319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.748 [2024-07-15 14:12:14.136407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:49.748 [2024-07-15 14:12:14.136440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.376 ms 00:30:49.748 [2024-07-15 14:12:14.136456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.748 [2024-07-15 14:12:14.168849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.748 [2024-07-15 14:12:14.168922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:49.748 [2024-07-15 14:12:14.168944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.330 ms 00:30:49.748 [2024-07-15 14:12:14.168959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.748 [2024-07-15 14:12:14.169034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.748 [2024-07-15 14:12:14.169056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:49.748 [2024-07-15 14:12:14.169071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:49.748 [2024-07-15 14:12:14.169087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.748 [2024-07-15 14:12:14.169219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.748 [2024-07-15 14:12:14.169254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:49.748 [2024-07-15 14:12:14.169274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:30:49.748 [2024-07-15 14:12:14.169288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.748 [2024-07-15 14:12:14.170398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2363.658 ms, result 0 00:30:49.748 { 00:30:49.748 "name": "ftl", 00:30:49.748 "uuid": "3e7932a1-a53c-4831-92f5-d4a8dbaa5201" 00:30:49.748 } 00:30:49.748 14:12:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:50.006 [2024-07-15 14:12:14.437620] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.006 14:12:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:50.264 14:12:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:50.523 [2024-07-15 14:12:14.982365] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:50.523 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:50.781 [2024-07-15 14:12:15.276002] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:50.781 14:12:15 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:51.348 Fill FTL, iteration 1 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:51.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=86066 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 86066 /var/tmp/spdk.tgt.sock 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86066 ']' 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:51.348 14:12:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:51.348 [2024-07-15 14:12:15.767087] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
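With FTL up (UUID 3e7932a1-a53c-4831-92f5-d4a8dbaa5201, startup in 2363.658 ms), the target exports it over NVMe/TCP so I/O can be driven from a separate process. The four RPCs, exactly as run above (paths shortened):

  rpc.py nvmf_create_transport --trtype TCP
  rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1      # allow any host, one namespace max
  rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl          # exposes the ftl bdev as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

The spdk_tgt launched right after (pid 86066, pinned to core 1 with its own RPC socket /var/tmp/spdk.tgt.sock) is the initiator side of that loopback connection.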
00:30:51.348 [2024-07-15 14:12:15.767262] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86066 ] 00:30:51.606 [2024-07-15 14:12:15.940550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.863 [2024-07-15 14:12:16.230348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.455 14:12:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.455 14:12:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:30:52.455 14:12:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:53.041 ftln1 00:30:53.041 14:12:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:53.041 14:12:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 86066 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86066 ']' 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86066 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86066 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:53.299 killing process with pid 86066 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86066' 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86066 00:30:53.299 14:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86066 00:30:55.199 14:12:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:55.199 14:12:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:55.458 [2024-07-15 14:12:19.748294] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
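tcp_dd works in two steps: the short-lived spdk_tgt (pid 86066) attaches to the loopback export and saves its bdev subsystem configuration, then it is killed and spdk_dd replays that JSON, reconnecting to the same export as ftln1 and pumping data through it. Roughly as below; the redirect into test/ftl/config/ini.json is inferred, since only the commands themselves appear in the trace:

  rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0      # -> ftln1
  { echo '{"subsystems": ['
    rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'; } > ini.json
  spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0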
00:30:55.458 [2024-07-15 14:12:19.748485] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86119 ] 00:30:55.458 [2024-07-15 14:12:19.922571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.716 [2024-07-15 14:12:20.149337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.204  Copying: 211/1024 [MB] (211 MBps) Copying: 418/1024 [MB] (207 MBps) Copying: 626/1024 [MB] (208 MBps) Copying: 836/1024 [MB] (210 MBps) Copying: 1024/1024 [MB] (average 208 MBps) 00:31:02.204 00:31:02.204 Calculate MD5 checksum, iteration 1 00:31:02.204 14:12:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:02.204 14:12:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:02.204 14:12:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:02.204 14:12:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:02.204 14:12:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:02.204 14:12:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:02.204 14:12:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:02.204 14:12:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:02.204 [2024-07-15 14:12:26.721620] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
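Once ini.json exists, later tcp_dd calls skip the initiator setup entirely; that is the bare "return 0" from ftl/common.sh@154 in the trace above. The guard is presumably no more than the following ($ini_json stands in for the full test/ftl/config/ini.json path):

  tcp_initiator_setup() {
      local rpc='rpc.py -s /var/tmp/spdk.tgt.sock'
      [[ -f "$ini_json" ]] && return 0   # config captured on a previous call; reuse it
      # ...otherwise launch spdk_tgt, attach over TCP, save_subsystem_config, kill it...
  }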
00:31:02.204 [2024-07-15 14:12:26.721823] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86189 ] 00:31:02.463 [2024-07-15 14:12:26.894686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.774 [2024-07-15 14:12:27.129836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.280  Copying: 501/1024 [MB] (501 MBps) Copying: 978/1024 [MB] (477 MBps) Copying: 1024/1024 [MB] (average 490 MBps) 00:31:06.280 00:31:06.280 14:12:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:06.280 14:12:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:08.812 14:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:08.812 Fill FTL, iteration 2 00:31:08.812 14:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=93566f87d107381cc281042623e08835 00:31:08.812 14:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:08.812 14:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:08.812 14:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:08.812 14:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:08.812 14:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:08.812 14:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:08.812 14:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:08.812 14:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:08.812 14:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:08.812 [2024-07-15 14:12:32.967324] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
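The first fill settled at an average of 208 MBps and its readback at about 490 MBps; iteration 2 now repeats the pattern one gigabyte further in. The seek/skip bookkeeping in the trace (both advance by count after every pass) implies a loop along these lines; variable names follow the trace, and $testfile stands in for test/ftl/file:

  bs=1048576; count=1024; qd=2; iterations=2; seek=0; skip=0
  for (( i = 0; i < iterations; i++ )); do
      echo "Fill FTL, iteration $(( i + 1 ))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      (( seek += count ))                               # next fill starts 1 GiB further in
      echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
      tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$skip
      (( skip += count ))
      sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')     # 93566f87... for iteration 1
  done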
00:31:08.812 [2024-07-15 14:12:32.967503] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86256 ] 00:31:08.812 [2024-07-15 14:12:33.138826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.070 [2024-07-15 14:12:33.367132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.552  Copying: 208/1024 [MB] (208 MBps) Copying: 407/1024 [MB] (199 MBps) Copying: 620/1024 [MB] (213 MBps) Copying: 835/1024 [MB] (215 MBps) Copying: 1024/1024 [MB] (average 207 MBps) 00:31:15.552 00:31:15.552 Calculate MD5 checksum, iteration 2 00:31:15.552 14:12:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:15.552 14:12:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:15.552 14:12:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:15.552 14:12:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:15.552 14:12:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:15.552 14:12:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:15.552 14:12:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:15.552 14:12:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:15.552 [2024-07-15 14:12:40.047441] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:31:15.552 [2024-07-15 14:12:40.047595] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86326 ] 00:31:15.810 [2024-07-15 14:12:40.213600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.067 [2024-07-15 14:12:40.402988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.165  Copying: 465/1024 [MB] (465 MBps) Copying: 913/1024 [MB] (448 MBps) Copying: 1024/1024 [MB] (average 457 MBps) 00:31:20.165 00:31:20.165 14:12:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:20.165 14:12:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:22.692 14:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:22.692 14:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=cdf9741576f2c674e423afa98fd87eab 00:31:22.692 14:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:22.692 14:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:22.692 14:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:22.948 [2024-07-15 14:12:47.442973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.948 [2024-07-15 14:12:47.443056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:22.949 [2024-07-15 14:12:47.443079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:22.949 [2024-07-15 14:12:47.443093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.949 [2024-07-15 14:12:47.443131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.949 [2024-07-15 14:12:47.443147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:22.949 [2024-07-15 14:12:47.443160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:22.949 [2024-07-15 14:12:47.443182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.949 [2024-07-15 14:12:47.443212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.949 [2024-07-15 14:12:47.443226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:22.949 [2024-07-15 14:12:47.443254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:22.949 [2024-07-15 14:12:47.443266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.949 [2024-07-15 14:12:47.443367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.373 ms, result 0 00:31:22.949 true 00:31:22.949 14:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:23.230 { 00:31:23.230 "name": "ftl", 00:31:23.230 "properties": [ 00:31:23.230 { 00:31:23.230 "name": "superblock_version", 00:31:23.230 "value": 5, 00:31:23.230 "read-only": true 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "name": "base_device", 00:31:23.230 "bands": [ 00:31:23.230 { 00:31:23.230 "id": 0, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 
00:31:23.230 { 00:31:23.230 "id": 1, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 2, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 3, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 4, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 5, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 6, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 7, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 8, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 9, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 10, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 11, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 12, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 13, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 14, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 15, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 16, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 17, 00:31:23.230 "state": "FREE", 00:31:23.230 "validity": 0.0 00:31:23.230 } 00:31:23.230 ], 00:31:23.230 "read-only": true 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "name": "cache_device", 00:31:23.230 "type": "bdev", 00:31:23.230 "chunks": [ 00:31:23.230 { 00:31:23.230 "id": 0, 00:31:23.230 "state": "INACTIVE", 00:31:23.230 "utilization": 0.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 1, 00:31:23.230 "state": "CLOSED", 00:31:23.230 "utilization": 1.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 2, 00:31:23.230 "state": "CLOSED", 00:31:23.230 "utilization": 1.0 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 3, 00:31:23.230 "state": "OPEN", 00:31:23.230 "utilization": 0.001953125 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "id": 4, 00:31:23.230 "state": "OPEN", 00:31:23.230 "utilization": 0.0 00:31:23.230 } 00:31:23.230 ], 00:31:23.230 "read-only": true 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "name": "verbose_mode", 00:31:23.230 "value": true, 00:31:23.230 "unit": "", 00:31:23.230 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:23.230 }, 00:31:23.230 { 00:31:23.230 "name": "prep_upgrade_on_shutdown", 00:31:23.230 "value": false, 00:31:23.230 "unit": "", 00:31:23.230 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:23.230 } 00:31:23.230 ] 00:31:23.230 } 00:31:23.230 14:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:23.543 [2024-07-15 14:12:47.992144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.543 [2024-07-15 
14:12:47.992215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:23.543 [2024-07-15 14:12:47.992236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:23.543 [2024-07-15 14:12:47.992248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.543 [2024-07-15 14:12:47.992285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.543 [2024-07-15 14:12:47.992317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:23.543 [2024-07-15 14:12:47.992333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:23.544 [2024-07-15 14:12:47.992345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.544 [2024-07-15 14:12:47.992374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.544 [2024-07-15 14:12:47.992388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:23.544 [2024-07-15 14:12:47.992400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:23.544 [2024-07-15 14:12:47.992411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.544 [2024-07-15 14:12:47.992502] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.334 ms, result 0 00:31:23.544 true 00:31:23.544 14:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:23.544 14:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:23.544 14:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:23.801 14:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:23.801 14:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:23.802 14:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:24.059 [2024-07-15 14:12:48.576960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.059 [2024-07-15 14:12:48.577038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:24.059 [2024-07-15 14:12:48.577059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:24.059 [2024-07-15 14:12:48.577072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.059 [2024-07-15 14:12:48.577111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.059 [2024-07-15 14:12:48.577127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:24.059 [2024-07-15 14:12:48.577139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:24.059 [2024-07-15 14:12:48.577150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.059 [2024-07-15 14:12:48.577178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.059 [2024-07-15 14:12:48.577192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:24.059 [2024-07-15 14:12:48.577203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:24.059 [2024-07-15 14:12:48.577214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
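Before shutting down, the test confirms there is dirty data for the upgrade path to carry over. The jq filter run above counts NV cache chunks with non-zero utilization:

  rpc.py bdev_ftl_get_properties -b ftl |
      jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

Against the JSON dumped earlier this returns 3: chunks 1 and 2 are CLOSED at utilization 1.0 and chunk 3 is OPEN at 0.001953125, so the [[ 3 -eq 0 ]] guard fails and the test proceeds with a dirty cache. prep_upgrade_on_shutdown has also just been flipped to true, which arms the next shutdown to run the upgrade preparation steps.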
00:31:24.059 [2024-07-15 14:12:48.577292] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.321 ms, result 0 00:31:24.059 true 00:31:24.059 14:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:24.653 { 00:31:24.653 "name": "ftl", 00:31:24.653 "properties": [ 00:31:24.653 { 00:31:24.653 "name": "superblock_version", 00:31:24.653 "value": 5, 00:31:24.653 "read-only": true 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "name": "base_device", 00:31:24.653 "bands": [ 00:31:24.653 { 00:31:24.653 "id": 0, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 1, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 2, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 3, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 4, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 5, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 6, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 7, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 8, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 9, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 10, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 11, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 12, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 13, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 14, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 15, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 16, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 17, 00:31:24.653 "state": "FREE", 00:31:24.653 "validity": 0.0 00:31:24.653 } 00:31:24.653 ], 00:31:24.653 "read-only": true 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "name": "cache_device", 00:31:24.653 "type": "bdev", 00:31:24.653 "chunks": [ 00:31:24.653 { 00:31:24.653 "id": 0, 00:31:24.653 "state": "INACTIVE", 00:31:24.653 "utilization": 0.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 1, 00:31:24.653 "state": "CLOSED", 00:31:24.653 "utilization": 1.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 2, 00:31:24.653 "state": "CLOSED", 00:31:24.653 "utilization": 1.0 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 3, 00:31:24.653 "state": "OPEN", 00:31:24.653 "utilization": 0.001953125 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "id": 4, 00:31:24.653 "state": "OPEN", 00:31:24.653 "utilization": 0.0 00:31:24.653 } 00:31:24.653 ], 00:31:24.653 "read-only": true 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "name": "verbose_mode", 00:31:24.653 "value": 
true, 00:31:24.653 "unit": "", 00:31:24.653 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:24.653 }, 00:31:24.653 { 00:31:24.653 "name": "prep_upgrade_on_shutdown", 00:31:24.653 "value": true, 00:31:24.653 "unit": "", 00:31:24.653 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:24.653 } 00:31:24.653 ] 00:31:24.653 } 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85949 ]] 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85949 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 85949 ']' 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 85949 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85949 00:31:24.653 killing process with pid 85949 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85949' 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 85949 00:31:24.653 14:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 85949 00:31:25.586 [2024-07-15 14:12:49.965121] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:25.586 [2024-07-15 14:12:49.982916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.586 [2024-07-15 14:12:49.982993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:25.586 [2024-07-15 14:12:49.983013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:25.586 [2024-07-15 14:12:49.983025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.586 [2024-07-15 14:12:49.983059] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:25.586 [2024-07-15 14:12:49.986577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.586 [2024-07-15 14:12:49.986629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:25.586 [2024-07-15 14:12:49.986663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.490 ms 00:31:25.586 [2024-07-15 14:12:49.986677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-07-15 14:12:59.079103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.555 [2024-07-15 14:12:59.079194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:35.555 [2024-07-15 14:12:59.079234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9092.419 ms 00:31:35.555 [2024-07-15 14:12:59.079252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-07-15 14:12:59.080649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
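With the flag armed, tcp_target_shutdown is just a kill of the target process (pid 85949, running as reactor_0), but it triggers the long half of the test, since FTL must quiesce and persist everything needed to reopen after the upgrade. A sketch of what the helper amounts to (killprocess is the autotest_common.sh wrapper that kills a pid and waits for it to exit; its exact signal handling is not shown in this trace):

  # tcp_target_shutdown, as run above; 85949 is this run's target pid
  killprocess 85949

The trace shows where the time goes: stopping the core poller alone took 9092.419 ms while in-flight writes drained, and the steps that follow persist the L2P, trim state, NV cache and band metadata, and finally the superblock before the process exits.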
00:31:35.555 [2024-07-15 14:12:59.080687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:35.555 [2024-07-15 14:12:59.080713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.362 ms 00:31:35.555 [2024-07-15 14:12:59.080725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-07-15 14:12:59.081988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.555 [2024-07-15 14:12:59.082020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:35.555 [2024-07-15 14:12:59.082035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.220 ms 00:31:35.555 [2024-07-15 14:12:59.082046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-07-15 14:12:59.095114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.555 [2024-07-15 14:12:59.095209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:35.555 [2024-07-15 14:12:59.095229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.000 ms 00:31:35.555 [2024-07-15 14:12:59.095243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-07-15 14:12:59.103338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.555 [2024-07-15 14:12:59.103467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:35.555 [2024-07-15 14:12:59.103490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.975 ms 00:31:35.555 [2024-07-15 14:12:59.103502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-07-15 14:12:59.103707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.555 [2024-07-15 14:12:59.103734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:35.555 [2024-07-15 14:12:59.103750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.117 ms 00:31:35.555 [2024-07-15 14:12:59.103761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-07-15 14:12:59.116717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.555 [2024-07-15 14:12:59.116806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:31:35.555 [2024-07-15 14:12:59.116827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.923 ms 00:31:35.555 [2024-07-15 14:12:59.116838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-07-15 14:12:59.129677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.555 [2024-07-15 14:12:59.129759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:31:35.555 [2024-07-15 14:12:59.129779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.768 ms 00:31:35.555 [2024-07-15 14:12:59.129791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-07-15 14:12:59.142849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.555 [2024-07-15 14:12:59.142942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:35.555 [2024-07-15 14:12:59.142963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.978 ms 00:31:35.555 [2024-07-15 14:12:59.142975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-07-15 14:12:59.156184] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:35.555 [2024-07-15 14:12:59.156284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:35.555 [2024-07-15 14:12:59.156315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.037 ms 00:31:35.555 [2024-07-15 14:12:59.156329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-07-15 14:12:59.156427] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:35.555 [2024-07-15 14:12:59.156455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:35.555 [2024-07-15 14:12:59.156490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:35.555 [2024-07-15 14:12:59.156503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:35.555 [2024-07-15 14:12:59.156515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:35.555 [2024-07-15 14:12:59.156528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:35.555 [2024-07-15 14:12:59.156540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:35.555 [2024-07-15 14:12:59.156551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:35.555 [2024-07-15 14:12:59.156563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:35.555 [2024-07-15 14:12:59.156575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:35.555 [2024-07-15 14:12:59.156588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:35.555 [2024-07-15 14:12:59.156600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:35.555 [2024-07-15 14:12:59.156611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:35.555 [2024-07-15 14:12:59.156623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:35.555 [2024-07-15 14:12:59.156635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:35.555 [2024-07-15 14:12:59.156647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:35.556 [2024-07-15 14:12:59.156679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:35.556 [2024-07-15 14:12:59.156691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:35.556 [2024-07-15 14:12:59.156703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:35.556 [2024-07-15 14:12:59.156719] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:35.556 [2024-07-15 14:12:59.156730] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3e7932a1-a53c-4831-92f5-d4a8dbaa5201 00:31:35.556 [2024-07-15 14:12:59.156742] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:35.556 [2024-07-15 14:12:59.156754] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
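A quick consistency check on the dump above: the three closed bands account for 261120 + 261120 + 2048 = 524288 written blocks, exactly the 'total valid LBAs' figure, and the write-amplification factor reported just below follows directly from the two write counters:

  WAF = total writes / user writes = 786752 / 524288 ≈ 1.5006

so the extra 262464 block writes are FTL metadata and relocation traffic on top of the user's 524288 blocks.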
00:31:35.556 [2024-07-15 14:12:59.156764] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:35.556 [2024-07-15 14:12:59.156777] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:35.556 [2024-07-15 14:12:59.156789] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:35.556 [2024-07-15 14:12:59.156801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:35.556 [2024-07-15 14:12:59.156812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:35.556 [2024-07-15 14:12:59.156822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:35.556 [2024-07-15 14:12:59.156834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:35.556 [2024-07-15 14:12:59.156846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.556 [2024-07-15 14:12:59.156857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:35.556 [2024-07-15 14:12:59.156869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.423 ms 00:31:35.556 [2024-07-15 14:12:59.156886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.174805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.556 [2024-07-15 14:12:59.174896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:35.556 [2024-07-15 14:12:59.174918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.849 ms 00:31:35.556 [2024-07-15 14:12:59.174930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.175468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.556 [2024-07-15 14:12:59.175495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:35.556 [2024-07-15 14:12:59.175522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.450 ms 00:31:35.556 [2024-07-15 14:12:59.175533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.228751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.228840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:35.556 [2024-07-15 14:12:59.228860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.228872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.228945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.228961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:35.556 [2024-07-15 14:12:59.228988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.229000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.229174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.229196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:35.556 [2024-07-15 14:12:59.229210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.229221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.229249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.229274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:35.556 [2024-07-15 14:12:59.229286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.229318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.351479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.351589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:35.556 [2024-07-15 14:12:59.351618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.351636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.479815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.479900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:35.556 [2024-07-15 14:12:59.479947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.479968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.480107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.480137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:35.556 [2024-07-15 14:12:59.480156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.480172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.480248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.480270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:35.556 [2024-07-15 14:12:59.480289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.480335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.480511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.480538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:35.556 [2024-07-15 14:12:59.480556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.480572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.480641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.480665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:35.556 [2024-07-15 14:12:59.480684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.480700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.480801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.480831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:35.556 [2024-07-15 14:12:59.480852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.480870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.480973] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:35.556 [2024-07-15 14:12:59.481002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:35.556 [2024-07-15 14:12:59.481022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:35.556 [2024-07-15 14:12:59.481041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.556 [2024-07-15 14:12:59.481344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9498.379 ms, result 0 00:31:38.084 14:13:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:38.084 14:13:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:38.084 14:13:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86547 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86547 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86547 ']' 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:38.085 14:13:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:38.343 [2024-07-15 14:13:02.682604] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
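With 'FTL shutdown' finished cleanly (result 0, ~9.5 s, most of it spent in the 'Stop core poller' step), the script relaunches spdk_tgt from tgt.json as pid 86547 and parks in waitforlisten until the new target answers on /var/tmp/spdk.sock. A hedged sketch of that wait loop, inferred from the traced variables (rpc_addr, max_retries=100); the real implementation in test/common/autotest_common.sh differs in detail:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i=0
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while (( i++ < max_retries )); do
          kill -0 "$pid" 2> /dev/null || return 1      # target died before listening
          # rpc_get_methods succeeds once the app's RPC server accepts connections
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                  rpc_get_methods &> /dev/null; then
              return 0
          fi
          sleep 0.5
      done
      return 1                                         # gave up waiting
  }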
00:31:38.343 [2024-07-15 14:13:02.682771] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86547 ] 00:31:38.343 [2024-07-15 14:13:02.844220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.610 [2024-07-15 14:13:03.127565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.550 [2024-07-15 14:13:03.937154] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:39.550 [2024-07-15 14:13:03.937243] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:39.550 [2024-07-15 14:13:04.086173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.550 [2024-07-15 14:13:04.086244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:39.550 [2024-07-15 14:13:04.086270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:39.550 [2024-07-15 14:13:04.086283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.550 [2024-07-15 14:13:04.086386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.550 [2024-07-15 14:13:04.086408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:39.550 [2024-07-15 14:13:04.086421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:31:39.550 [2024-07-15 14:13:04.086433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.550 [2024-07-15 14:13:04.086467] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:39.550 [2024-07-15 14:13:04.087524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:39.550 [2024-07-15 14:13:04.087568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.550 [2024-07-15 14:13:04.087583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:39.550 [2024-07-15 14:13:04.087597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.108 ms 00:31:39.550 [2024-07-15 14:13:04.087608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.550 [2024-07-15 14:13:04.088807] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:39.810 [2024-07-15 14:13:04.105636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.810 [2024-07-15 14:13:04.105715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:39.810 [2024-07-15 14:13:04.105735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.826 ms 00:31:39.810 [2024-07-15 14:13:04.105748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.810 [2024-07-15 14:13:04.105910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.810 [2024-07-15 14:13:04.105937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:39.810 [2024-07-15 14:13:04.105951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:31:39.810 [2024-07-15 14:13:04.105962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.810 [2024-07-15 14:13:04.110971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.810 [2024-07-15 
14:13:04.111070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:39.810 [2024-07-15 14:13:04.111102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.852 ms 00:31:39.810 [2024-07-15 14:13:04.111123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.810 [2024-07-15 14:13:04.111299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.810 [2024-07-15 14:13:04.111372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:39.810 [2024-07-15 14:13:04.111395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.092 ms 00:31:39.810 [2024-07-15 14:13:04.111423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.810 [2024-07-15 14:13:04.111563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.810 [2024-07-15 14:13:04.111608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:39.810 [2024-07-15 14:13:04.111642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:39.810 [2024-07-15 14:13:04.111669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.810 [2024-07-15 14:13:04.111755] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:39.810 [2024-07-15 14:13:04.116426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.810 [2024-07-15 14:13:04.116477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:39.810 [2024-07-15 14:13:04.116494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.690 ms 00:31:39.810 [2024-07-15 14:13:04.116506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.810 [2024-07-15 14:13:04.116569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.810 [2024-07-15 14:13:04.116587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:39.810 [2024-07-15 14:13:04.116600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:39.810 [2024-07-15 14:13:04.116617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.810 [2024-07-15 14:13:04.116702] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:39.810 [2024-07-15 14:13:04.116738] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:39.810 [2024-07-15 14:13:04.116782] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:39.810 [2024-07-15 14:13:04.116804] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:31:39.810 [2024-07-15 14:13:04.116910] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:39.810 [2024-07-15 14:13:04.116925] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:39.810 [2024-07-15 14:13:04.116946] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:31:39.810 [2024-07-15 14:13:04.116961] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:39.810 [2024-07-15 14:13:04.116974] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:39.810 [2024-07-15 14:13:04.116986] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:39.810 [2024-07-15 14:13:04.116997] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:39.810 [2024-07-15 14:13:04.117008] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:39.810 [2024-07-15 14:13:04.117019] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:39.810 [2024-07-15 14:13:04.117030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.810 [2024-07-15 14:13:04.117041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:39.810 [2024-07-15 14:13:04.117053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.332 ms 00:31:39.810 [2024-07-15 14:13:04.117064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.810 [2024-07-15 14:13:04.117168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.810 [2024-07-15 14:13:04.117184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:39.810 [2024-07-15 14:13:04.117196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:31:39.810 [2024-07-15 14:13:04.117213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.810 [2024-07-15 14:13:04.117343] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:39.810 [2024-07-15 14:13:04.117364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:39.810 [2024-07-15 14:13:04.117377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:39.810 [2024-07-15 14:13:04.117389] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:39.810 [2024-07-15 14:13:04.117407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:39.810 [2024-07-15 14:13:04.117418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:39.810 [2024-07-15 14:13:04.117429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:39.810 [2024-07-15 14:13:04.117439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:39.810 [2024-07-15 14:13:04.117450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:39.810 [2024-07-15 14:13:04.117460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:39.810 [2024-07-15 14:13:04.117471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:39.810 [2024-07-15 14:13:04.117481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:39.810 [2024-07-15 14:13:04.117491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:39.810 [2024-07-15 14:13:04.117502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:39.810 [2024-07-15 14:13:04.117512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:39.810 [2024-07-15 14:13:04.117523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:39.810 [2024-07-15 14:13:04.117533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:39.810 [2024-07-15 14:13:04.117543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:39.810 [2024-07-15 14:13:04.117553] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:39.810 [2024-07-15 14:13:04.117564] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:39.810 [2024-07-15 14:13:04.117574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:39.810 [2024-07-15 14:13:04.117585] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:39.810 [2024-07-15 14:13:04.117595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:39.810 [2024-07-15 14:13:04.117606] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:39.810 [2024-07-15 14:13:04.117616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:39.810 [2024-07-15 14:13:04.117626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:39.810 [2024-07-15 14:13:04.117637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:39.810 [2024-07-15 14:13:04.117648] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:39.810 [2024-07-15 14:13:04.117658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:39.810 [2024-07-15 14:13:04.117668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:39.810 [2024-07-15 14:13:04.117678] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:39.810 [2024-07-15 14:13:04.117689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:39.810 [2024-07-15 14:13:04.117699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:39.810 [2024-07-15 14:13:04.117709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:39.810 [2024-07-15 14:13:04.117719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:39.810 [2024-07-15 14:13:04.117730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:39.810 [2024-07-15 14:13:04.117740] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:39.810 [2024-07-15 14:13:04.117751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:39.811 [2024-07-15 14:13:04.117761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:39.811 [2024-07-15 14:13:04.117771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:39.811 [2024-07-15 14:13:04.117782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:39.811 [2024-07-15 14:13:04.117792] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:39.811 [2024-07-15 14:13:04.117802] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:39.811 [2024-07-15 14:13:04.117812] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:39.811 [2024-07-15 14:13:04.117829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:39.811 [2024-07-15 14:13:04.117840] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:39.811 [2024-07-15 14:13:04.117851] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:39.811 [2024-07-15 14:13:04.117862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:39.811 [2024-07-15 14:13:04.117873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:39.811 [2024-07-15 14:13:04.117883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:39.811 [2024-07-15 14:13:04.117894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:39.811 [2024-07-15 14:13:04.117919] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:39.811 [2024-07-15 14:13:04.117930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:39.811 [2024-07-15 14:13:04.117942] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:39.811 [2024-07-15 14:13:04.117956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:39.811 [2024-07-15 14:13:04.117970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:39.811 [2024-07-15 14:13:04.117981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:39.811 [2024-07-15 14:13:04.117993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:39.811 [2024-07-15 14:13:04.118004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:39.811 [2024-07-15 14:13:04.118015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:39.811 [2024-07-15 14:13:04.118027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:39.811 [2024-07-15 14:13:04.118038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:39.811 [2024-07-15 14:13:04.118049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:39.811 [2024-07-15 14:13:04.118060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:39.811 [2024-07-15 14:13:04.118071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:39.811 [2024-07-15 14:13:04.118082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:39.811 [2024-07-15 14:13:04.118093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:39.811 [2024-07-15 14:13:04.118104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:39.811 [2024-07-15 14:13:04.118122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:39.811 [2024-07-15 14:13:04.118134] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:39.811 [2024-07-15 14:13:04.118147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:39.811 [2024-07-15 14:13:04.118160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:39.811 [2024-07-15 14:13:04.118171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:39.811 [2024-07-15 14:13:04.118182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:39.811 [2024-07-15 14:13:04.118194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:39.811 [2024-07-15 14:13:04.118206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.811 [2024-07-15 14:13:04.118218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:39.811 [2024-07-15 14:13:04.118230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.946 ms 00:31:39.811 [2024-07-15 14:13:04.118246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.811 [2024-07-15 14:13:04.118325] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:39.811 [2024-07-15 14:13:04.118351] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:41.710 [2024-07-15 14:13:06.128523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.710 [2024-07-15 14:13:06.128601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:41.710 [2024-07-15 14:13:06.128623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2010.207 ms 00:31:41.710 [2024-07-15 14:13:06.128635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.710 [2024-07-15 14:13:06.161696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.710 [2024-07-15 14:13:06.161757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:41.710 [2024-07-15 14:13:06.161778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.745 ms 00:31:41.710 [2024-07-15 14:13:06.161797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.710 [2024-07-15 14:13:06.161962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.710 [2024-07-15 14:13:06.161986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:41.710 [2024-07-15 14:13:06.161999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:41.710 [2024-07-15 14:13:06.162011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.710 [2024-07-15 14:13:06.200907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.710 [2024-07-15 14:13:06.200972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:41.710 [2024-07-15 14:13:06.200992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.833 ms 00:31:41.710 [2024-07-15 14:13:06.201004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.710 [2024-07-15 14:13:06.201084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.710 [2024-07-15 14:13:06.201101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:41.710 [2024-07-15 14:13:06.201115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:41.710 [2024-07-15 14:13:06.201126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.710 [2024-07-15 14:13:06.201581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.710 [2024-07-15 14:13:06.201603] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:41.710 [2024-07-15 14:13:06.201624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.373 ms 00:31:41.710 [2024-07-15 14:13:06.201635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.710 [2024-07-15 14:13:06.201698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.710 [2024-07-15 14:13:06.201715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:41.710 [2024-07-15 14:13:06.201727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:31:41.710 [2024-07-15 14:13:06.201738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.710 [2024-07-15 14:13:06.219484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.710 [2024-07-15 14:13:06.219544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:41.710 [2024-07-15 14:13:06.219565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.714 ms 00:31:41.710 [2024-07-15 14:13:06.219576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.710 [2024-07-15 14:13:06.236362] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:41.710 [2024-07-15 14:13:06.236438] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:41.710 [2024-07-15 14:13:06.236460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.710 [2024-07-15 14:13:06.236481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:41.710 [2024-07-15 14:13:06.236507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.703 ms 00:31:41.710 [2024-07-15 14:13:06.236528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.710 [2024-07-15 14:13:06.255121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.710 [2024-07-15 14:13:06.255202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:41.710 [2024-07-15 14:13:06.255223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.474 ms 00:31:41.710 [2024-07-15 14:13:06.255236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.271505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.967 [2024-07-15 14:13:06.271579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:41.967 [2024-07-15 14:13:06.271599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.159 ms 00:31:41.967 [2024-07-15 14:13:06.271611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.288365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.967 [2024-07-15 14:13:06.288471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:41.967 [2024-07-15 14:13:06.288503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.653 ms 00:31:41.967 [2024-07-15 14:13:06.288522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.289801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.967 [2024-07-15 14:13:06.289858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:41.967 [2024-07-15 
14:13:06.289887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.968 ms 00:31:41.967 [2024-07-15 14:13:06.289908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.378480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.967 [2024-07-15 14:13:06.378561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:41.967 [2024-07-15 14:13:06.378582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 88.513 ms 00:31:41.967 [2024-07-15 14:13:06.378594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.391963] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:41.967 [2024-07-15 14:13:06.392993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.967 [2024-07-15 14:13:06.393046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:41.967 [2024-07-15 14:13:06.393072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.253 ms 00:31:41.967 [2024-07-15 14:13:06.393102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.393265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.967 [2024-07-15 14:13:06.393294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:41.967 [2024-07-15 14:13:06.393335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:41.967 [2024-07-15 14:13:06.393355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.393460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.967 [2024-07-15 14:13:06.393482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:41.967 [2024-07-15 14:13:06.393495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:41.967 [2024-07-15 14:13:06.393507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.393554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.967 [2024-07-15 14:13:06.393569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:41.967 [2024-07-15 14:13:06.393582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:41.967 [2024-07-15 14:13:06.393592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.393635] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:41.967 [2024-07-15 14:13:06.393652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.967 [2024-07-15 14:13:06.393664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:41.967 [2024-07-15 14:13:06.393675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:41.967 [2024-07-15 14:13:06.393686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.425719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.967 [2024-07-15 14:13:06.425796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:41.967 [2024-07-15 14:13:06.425817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.999 ms 00:31:41.967 [2024-07-15 14:13:06.425829] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.425978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.967 [2024-07-15 14:13:06.425998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:41.967 [2024-07-15 14:13:06.426022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:31:41.967 [2024-07-15 14:13:06.426033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.967 [2024-07-15 14:13:06.427513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2340.764 ms, result 0 00:31:41.967 [2024-07-15 14:13:06.442252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:41.968 [2024-07-15 14:13:06.458295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:41.968 [2024-07-15 14:13:06.467553] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:42.899 14:13:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:42.899 14:13:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:31:42.899 14:13:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:42.900 14:13:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:42.900 14:13:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:43.157 [2024-07-15 14:13:07.556761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.157 [2024-07-15 14:13:07.556837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:43.157 [2024-07-15 14:13:07.556859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:43.157 [2024-07-15 14:13:07.556872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.157 [2024-07-15 14:13:07.556910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.157 [2024-07-15 14:13:07.556932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:43.157 [2024-07-15 14:13:07.556945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:43.157 [2024-07-15 14:13:07.556958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.157 [2024-07-15 14:13:07.557021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.157 [2024-07-15 14:13:07.557049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:43.157 [2024-07-15 14:13:07.557072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:43.157 [2024-07-15 14:13:07.557084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.157 [2024-07-15 14:13:07.557175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.398 ms, result 0 00:31:43.157 true 00:31:43.157 14:13:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:43.415 { 00:31:43.415 "name": "ftl", 00:31:43.415 "properties": [ 00:31:43.415 { 00:31:43.415 "name": "superblock_version", 00:31:43.415 "value": 5, 00:31:43.415 "read-only": true 00:31:43.415 }, 
00:31:43.415 { 00:31:43.415 "name": "base_device", 00:31:43.415 "bands": [ 00:31:43.415 { 00:31:43.415 "id": 0, 00:31:43.415 "state": "CLOSED", 00:31:43.415 "validity": 1.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 1, 00:31:43.415 "state": "CLOSED", 00:31:43.415 "validity": 1.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 2, 00:31:43.415 "state": "CLOSED", 00:31:43.415 "validity": 0.007843137254901933 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 3, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 4, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 5, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 6, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 7, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 8, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 9, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 10, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 11, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 12, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 13, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 14, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 15, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 16, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 17, 00:31:43.415 "state": "FREE", 00:31:43.415 "validity": 0.0 00:31:43.415 } 00:31:43.415 ], 00:31:43.415 "read-only": true 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "name": "cache_device", 00:31:43.415 "type": "bdev", 00:31:43.415 "chunks": [ 00:31:43.415 { 00:31:43.415 "id": 0, 00:31:43.415 "state": "INACTIVE", 00:31:43.415 "utilization": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 1, 00:31:43.415 "state": "OPEN", 00:31:43.415 "utilization": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 2, 00:31:43.415 "state": "OPEN", 00:31:43.415 "utilization": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 3, 00:31:43.415 "state": "FREE", 00:31:43.415 "utilization": 0.0 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "id": 4, 00:31:43.415 "state": "FREE", 00:31:43.415 "utilization": 0.0 00:31:43.415 } 00:31:43.415 ], 00:31:43.415 "read-only": true 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "name": "verbose_mode", 00:31:43.415 "value": true, 00:31:43.415 "unit": "", 00:31:43.415 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:43.415 }, 00:31:43.415 { 00:31:43.415 "name": "prep_upgrade_on_shutdown", 00:31:43.415 "value": false, 00:31:43.415 "unit": "", 00:31:43.415 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:43.415 } 00:31:43.415 ] 00:31:43.415 } 00:31:43.415 14:13:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:43.415 14:13:07 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:43.415 14:13:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:43.673 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:43.673 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:43.673 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:43.673 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:43.673 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:44.250 Validate MD5 checksum, iteration 1 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:44.250 14:13:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:44.250 [2024-07-15 14:13:08.640954] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
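The two property checks traced above (upgrade_shutdown.sh@82 and @89) reduce the bdev_ftl_get_properties JSON to single counters: NV-cache chunks with non-zero utilization, and bands left in the OPENED state. Both come back 0 here, confirming nothing is dirty before the checksum pass begins. The @82 check, reflowed for readability (paths exactly as in the log):

  used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -ne 0 ]] && echo "NV cache still holds dirty chunks"

Note the JSON nests chunks under the "cache_device" property, so selecting on the property name first is what keeps base-device bands out of the count.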
00:31:44.250 [2024-07-15 14:13:08.641107] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86622 ] 00:31:44.507 [2024-07-15 14:13:08.800905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.765 [2024-07-15 14:13:09.136666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.067  Copying: 390/1024 [MB] (390 MBps) Copying: 772/1024 [MB] (382 MBps) Copying: 1024/1024 [MB] (average 380 MBps) 00:31:50.067 00:31:50.067 14:13:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:50.067 14:13:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:52.591 Validate MD5 checksum, iteration 2 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=93566f87d107381cc281042623e08835 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 93566f87d107381cc281042623e08835 != \9\3\5\6\6\f\8\7\d\1\0\7\3\8\1\c\c\2\8\1\0\4\2\6\2\3\e\0\8\8\3\5 ]] 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:52.591 14:13:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:52.591 [2024-07-15 14:13:16.742138] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
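Each checksum iteration above follows the same pattern: tcp_dd (the test/ftl/common.sh wrapper that drives spdk_dd through the NVMe/TCP initiator config, as the trace shows) reads a 1024 MiB window from ftln1, md5sum hashes the result, and the sum is compared with the value recorded when the data was written. The long run of backslashes in the traced comparison is only bash xtrace quoting the right-hand side of != character by character, because an unquoted RHS in [[ ]] is a pattern. A minimal per-iteration sketch ($skip and $expected_sum are illustrative names, not variables from the script):

  tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
         --bs=1048576 --count=1024 --qd=2 --skip=$skip
  sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
  [[ $sum == "$expected_sum" ]] || return 1    # quoted RHS: literal comparison, no globbing
  skip=$((skip + 1024))                        # advance to the next 1 GiB window

Iteration 1 passes (93566f87d107381cc281042623e08835 on both sides), and iteration 2 is kicked off the same way with --skip=1024.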
00:31:52.591 [2024-07-15 14:13:16.742406] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86703 ] 00:31:52.591 [2024-07-15 14:13:16.923433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.856 [2024-07-15 14:13:17.222863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.211  Copying: 426/1024 [MB] (426 MBps) Copying: 838/1024 [MB] (412 MBps) Copying: 1024/1024 [MB] (average 417 MBps) 00:31:59.211 00:31:59.211 14:13:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:59.211 14:13:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cdf9741576f2c674e423afa98fd87eab 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cdf9741576f2c674e423afa98fd87eab != \c\d\f\9\7\4\1\5\7\6\f\2\c\6\7\4\e\4\2\3\a\f\a\9\8\f\d\8\7\e\a\b ]] 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 86547 ]] 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 86547 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86798 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86798 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86798 ']' 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
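The two "Validate MD5 checksum" passes just recorded follow one loop, test_validate_checksum at upgrade_shutdown.sh@96-105. A sketch reconstructed from the xtrace; tcp_dd is the suite's helper that drives spdk_dd through the NVMe/TCP initiator (visible in the trace), while $testfile and the md5 array holding the digests captured at write time are assumed names:

    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1 GiB back from the FTL device (1 MiB blocks, queue depth 2),
        # advancing --skip so each iteration covers the next 1 GiB region.
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$(( skip + 1024 ))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        # Fail if the read-back digest differs from the one recorded at write time.
        [[ $sum == "${md5[$i]}" ]] || return 1
    done

Immediately after the second match, the kill -9 on pid 86547 above is the deliberate dirty shutdown: the target dies with no chance to persist a clean FTL state, which is what forces the recovery startup traced next.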
00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:01.733 14:13:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:01.733 [2024-07-15 14:13:26.069958] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:32:01.733 [2024-07-15 14:13:26.070138] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86798 ] 00:32:01.733 [2024-07-15 14:13:26.249056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 86547 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:01.991 [2024-07-15 14:13:26.467803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.924 [2024-07-15 14:13:27.267060] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:02.924 [2024-07-15 14:13:27.267141] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:02.924 [2024-07-15 14:13:27.416126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.924 [2024-07-15 14:13:27.416199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:02.924 [2024-07-15 14:13:27.416225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:02.924 [2024-07-15 14:13:27.416238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.416335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.924 [2024-07-15 14:13:27.416356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:02.924 [2024-07-15 14:13:27.416369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:32:02.924 [2024-07-15 14:13:27.416381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.416418] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:02.924 [2024-07-15 14:13:27.417415] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:02.924 [2024-07-15 14:13:27.417459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.924 [2024-07-15 14:13:27.417474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:02.924 [2024-07-15 14:13:27.417488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.048 ms 00:32:02.924 [2024-07-15 14:13:27.417500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.418018] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:02.924 [2024-07-15 14:13:27.438666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.924 [2024-07-15 14:13:27.438735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:02.924 [2024-07-15 14:13:27.438768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.648 ms 00:32:02.924 [2024-07-15 14:13:27.438791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.451268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:32:02.924 [2024-07-15 14:13:27.451352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:02.924 [2024-07-15 14:13:27.451372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:32:02.924 [2024-07-15 14:13:27.451384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.451927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.924 [2024-07-15 14:13:27.451962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:02.924 [2024-07-15 14:13:27.451985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.408 ms 00:32:02.924 [2024-07-15 14:13:27.451997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.452072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.924 [2024-07-15 14:13:27.452092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:02.924 [2024-07-15 14:13:27.452105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:32:02.924 [2024-07-15 14:13:27.452117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.452161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.924 [2024-07-15 14:13:27.452178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:02.924 [2024-07-15 14:13:27.452191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:02.924 [2024-07-15 14:13:27.452207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.452245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:02.924 [2024-07-15 14:13:27.456423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.924 [2024-07-15 14:13:27.456465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:02.924 [2024-07-15 14:13:27.456481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.187 ms 00:32:02.924 [2024-07-15 14:13:27.456494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.456534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.924 [2024-07-15 14:13:27.456550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:02.924 [2024-07-15 14:13:27.456563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:02.924 [2024-07-15 14:13:27.456575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.456627] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:02.924 [2024-07-15 14:13:27.456658] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:02.924 [2024-07-15 14:13:27.456704] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:02.924 [2024-07-15 14:13:27.456726] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:32:02.924 [2024-07-15 14:13:27.456832] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:02.924 [2024-07-15 14:13:27.456848] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:02.924 [2024-07-15 14:13:27.456863] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:32:02.924 [2024-07-15 14:13:27.456878] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:02.924 [2024-07-15 14:13:27.456893] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:02.924 [2024-07-15 14:13:27.456906] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:02.924 [2024-07-15 14:13:27.456918] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:02.924 [2024-07-15 14:13:27.456933] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:02.924 [2024-07-15 14:13:27.456944] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:02.924 [2024-07-15 14:13:27.456956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.924 [2024-07-15 14:13:27.456968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:02.924 [2024-07-15 14:13:27.456985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:32:02.924 [2024-07-15 14:13:27.456997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.457089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.924 [2024-07-15 14:13:27.457104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:02.924 [2024-07-15 14:13:27.457116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:32:02.924 [2024-07-15 14:13:27.457128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.924 [2024-07-15 14:13:27.457258] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:02.924 [2024-07-15 14:13:27.457287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:02.924 [2024-07-15 14:13:27.457320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:02.924 [2024-07-15 14:13:27.457337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.924 [2024-07-15 14:13:27.457350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:02.924 [2024-07-15 14:13:27.457361] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:02.924 [2024-07-15 14:13:27.457373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:02.924 [2024-07-15 14:13:27.457385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:02.924 [2024-07-15 14:13:27.457396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:02.924 [2024-07-15 14:13:27.457407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.924 [2024-07-15 14:13:27.457418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:02.924 [2024-07-15 14:13:27.457429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:02.924 [2024-07-15 14:13:27.457440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.924 [2024-07-15 14:13:27.457451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:02.924 [2024-07-15 14:13:27.457462] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:32:02.924 [2024-07-15 14:13:27.457473] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.924 [2024-07-15 14:13:27.457484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:02.924 [2024-07-15 14:13:27.457496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:02.924 [2024-07-15 14:13:27.457508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.924 [2024-07-15 14:13:27.457519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:02.924 [2024-07-15 14:13:27.457530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:02.924 [2024-07-15 14:13:27.457541] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.924 [2024-07-15 14:13:27.457552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:02.924 [2024-07-15 14:13:27.457563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:02.924 [2024-07-15 14:13:27.457574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.924 [2024-07-15 14:13:27.457585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:02.924 [2024-07-15 14:13:27.457596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:02.924 [2024-07-15 14:13:27.457607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.924 [2024-07-15 14:13:27.457618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:02.924 [2024-07-15 14:13:27.457629] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:02.924 [2024-07-15 14:13:27.457640] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.924 [2024-07-15 14:13:27.457651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:02.925 [2024-07-15 14:13:27.457662] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:02.925 [2024-07-15 14:13:27.457673] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.925 [2024-07-15 14:13:27.457689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:02.925 [2024-07-15 14:13:27.457700] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:02.925 [2024-07-15 14:13:27.457711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.925 [2024-07-15 14:13:27.457722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:02.925 [2024-07-15 14:13:27.457733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:02.925 [2024-07-15 14:13:27.457744] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.925 [2024-07-15 14:13:27.457755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:02.925 [2024-07-15 14:13:27.457765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:02.925 [2024-07-15 14:13:27.457776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.925 [2024-07-15 14:13:27.457787] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:02.925 [2024-07-15 14:13:27.457799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:02.925 [2024-07-15 14:13:27.457810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:02.925 [2024-07-15 14:13:27.457821] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:32:02.925 [2024-07-15 14:13:27.457833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:02.925 [2024-07-15 14:13:27.457845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:02.925 [2024-07-15 14:13:27.457870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:02.925 [2024-07-15 14:13:27.457883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:02.925 [2024-07-15 14:13:27.457894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:02.925 [2024-07-15 14:13:27.457906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:02.925 [2024-07-15 14:13:27.457919] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:02.925 [2024-07-15 14:13:27.457939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:02.925 [2024-07-15 14:13:27.457952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:02.925 [2024-07-15 14:13:27.457964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:02.925 [2024-07-15 14:13:27.457975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:02.925 [2024-07-15 14:13:27.457987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:02.925 [2024-07-15 14:13:27.457999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:02.925 [2024-07-15 14:13:27.458011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:02.925 [2024-07-15 14:13:27.458023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:02.925 [2024-07-15 14:13:27.458034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:02.925 [2024-07-15 14:13:27.458046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:02.925 [2024-07-15 14:13:27.458057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:02.925 [2024-07-15 14:13:27.458069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:02.925 [2024-07-15 14:13:27.458080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:02.925 [2024-07-15 14:13:27.458092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:02.925 [2024-07-15 14:13:27.458106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:02.925 [2024-07-15 14:13:27.458117] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:32:02.925 [2024-07-15 14:13:27.458130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:02.925 [2024-07-15 14:13:27.458143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:02.925 [2024-07-15 14:13:27.458154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:02.925 [2024-07-15 14:13:27.458166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:02.925 [2024-07-15 14:13:27.458178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:02.925 [2024-07-15 14:13:27.458190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.925 [2024-07-15 14:13:27.458202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:02.925 [2024-07-15 14:13:27.458214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.998 ms 00:32:02.925 [2024-07-15 14:13:27.458226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.489700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.489763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:03.185 [2024-07-15 14:13:27.489783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.398 ms 00:32:03.185 [2024-07-15 14:13:27.489797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.489872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.489889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:03.185 [2024-07-15 14:13:27.489902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:03.185 [2024-07-15 14:13:27.489920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.528572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.528635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:03.185 [2024-07-15 14:13:27.528655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.556 ms 00:32:03.185 [2024-07-15 14:13:27.528668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.528745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.528768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:03.185 [2024-07-15 14:13:27.528781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:03.185 [2024-07-15 14:13:27.528794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.528961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.528979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:03.185 [2024-07-15 14:13:27.528993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:32:03.185 [2024-07-15 14:13:27.529006] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.529063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.529078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:03.185 [2024-07-15 14:13:27.529095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:32:03.185 [2024-07-15 14:13:27.529107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.551244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.551387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:03.185 [2024-07-15 14:13:27.551468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.101 ms 00:32:03.185 [2024-07-15 14:13:27.551498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.551823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.551875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:03.185 [2024-07-15 14:13:27.551904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:32:03.185 [2024-07-15 14:13:27.551926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.587474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.587552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:03.185 [2024-07-15 14:13:27.587575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.484 ms 00:32:03.185 [2024-07-15 14:13:27.587589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.600867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.600953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:03.185 [2024-07-15 14:13:27.600974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.723 ms 00:32:03.185 [2024-07-15 14:13:27.600987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.675729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.675806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:03.185 [2024-07-15 14:13:27.675829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 74.629 ms 00:32:03.185 [2024-07-15 14:13:27.675842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.676085] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:03.185 [2024-07-15 14:13:27.676248] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:03.185 [2024-07-15 14:13:27.676417] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:03.185 [2024-07-15 14:13:27.676566] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:03.185 [2024-07-15 14:13:27.676583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.676596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:03.185 [2024-07-15 
14:13:27.676610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.651 ms 00:32:03.185 [2024-07-15 14:13:27.676622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.676730] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:03.185 [2024-07-15 14:13:27.676753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.676765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:03.185 [2024-07-15 14:13:27.676777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:03.185 [2024-07-15 14:13:27.676789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.696430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.696498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:03.185 [2024-07-15 14:13:27.696520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.605 ms 00:32:03.185 [2024-07-15 14:13:27.696539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.708771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.185 [2024-07-15 14:13:27.708821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:03.185 [2024-07-15 14:13:27.708838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:03.185 [2024-07-15 14:13:27.708851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.185 [2024-07-15 14:13:27.709080] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:03.752 [2024-07-15 14:13:28.199407] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:32:03.752 [2024-07-15 14:13:28.199604] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:04.320 [2024-07-15 14:13:28.671626] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:04.320 [2024-07-15 14:13:28.671766] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:04.320 [2024-07-15 14:13:28.671798] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:04.320 [2024-07-15 14:13:28.671815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.320 [2024-07-15 14:13:28.671829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:04.320 [2024-07-15 14:13:28.671846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 962.851 ms 00:32:04.320 [2024-07-15 14:13:28.671859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.320 [2024-07-15 14:13:28.671908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.320 [2024-07-15 14:13:28.671924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:04.320 [2024-07-15 14:13:28.671938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:04.320 [2024-07-15 14:13:28.671950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:32:04.320 [2024-07-15 14:13:28.684692] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:04.320 [2024-07-15 14:13:28.684880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.320 [2024-07-15 14:13:28.684904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:04.320 [2024-07-15 14:13:28.684922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.905 ms 00:32:04.321 [2024-07-15 14:13:28.684934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.321 [2024-07-15 14:13:28.685742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.321 [2024-07-15 14:13:28.685777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:04.321 [2024-07-15 14:13:28.685793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.662 ms 00:32:04.321 [2024-07-15 14:13:28.685806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.321 [2024-07-15 14:13:28.688337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.321 [2024-07-15 14:13:28.688372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:04.321 [2024-07-15 14:13:28.688388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.504 ms 00:32:04.321 [2024-07-15 14:13:28.688401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.321 [2024-07-15 14:13:28.688455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.321 [2024-07-15 14:13:28.688472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:04.321 [2024-07-15 14:13:28.688485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:04.321 [2024-07-15 14:13:28.688497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.321 [2024-07-15 14:13:28.688628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.321 [2024-07-15 14:13:28.688655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:04.321 [2024-07-15 14:13:28.688673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:04.321 [2024-07-15 14:13:28.688686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.321 [2024-07-15 14:13:28.688716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.321 [2024-07-15 14:13:28.688730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:04.321 [2024-07-15 14:13:28.688748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:04.321 [2024-07-15 14:13:28.688760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.321 [2024-07-15 14:13:28.688802] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:04.321 [2024-07-15 14:13:28.688824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.321 [2024-07-15 14:13:28.688847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:04.321 [2024-07-15 14:13:28.688859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:04.321 [2024-07-15 14:13:28.688875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.321 [2024-07-15 14:13:28.688938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.321 [2024-07-15 
14:13:28.688954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:04.321 [2024-07-15 14:13:28.688966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:32:04.321 [2024-07-15 14:13:28.688978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.321 [2024-07-15 14:13:28.690181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1273.558 ms, result 0 00:32:04.321 [2024-07-15 14:13:28.705556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:04.321 [2024-07-15 14:13:28.721591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:04.321 [2024-07-15 14:13:28.730627] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:04.580 Validate MD5 checksum, iteration 1 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:04.580 14:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:04.580 [2024-07-15 14:13:29.039901] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
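The startup sequence that just completed ('FTL startup', 1273.558 ms, result 0) is the recovery path provoked by that kill -9: the trace shows P2L checkpoints being preprocessed, two open chunks replayed from the NV cache, and the L2P restored from shared memory. The shutdown/restart pair itself, as reconstructed from the common.sh xtrace (@137-139 and @81-91); $rootdir, the backgrounding via &, and $tgt_json are assumed, the rest mirrors the trace:

    # tcp_target_shutdown_dirty: SIGKILL the target so FTL cannot persist
    # a clean state, forcing recovery on the next startup.
    [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
    unset spdk_tgt_pid

    # tcp_target_setup: relaunch the target from the saved JSON config
    # and block until its RPC socket is listening.
    "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' --config="$tgt_json" &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid

With the listener back on port 4420, the test re-runs the same checksum validation below; matching digests (93566f87... and cdf97415... again) are what prove the recovery lost no data.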
00:32:04.580 [2024-07-15 14:13:29.040068] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86833 ] 00:32:04.839 [2024-07-15 14:13:29.212904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.115 [2024-07-15 14:13:29.440903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.907  Copying: 408/1024 [MB] (408 MBps) Copying: 783/1024 [MB] (375 MBps) Copying: 1024/1024 [MB] (average 396 MBps) 00:32:09.907 00:32:10.165 14:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:10.165 14:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=93566f87d107381cc281042623e08835 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 93566f87d107381cc281042623e08835 != \9\3\5\6\6\f\8\7\d\1\0\7\3\8\1\c\c\2\8\1\0\4\2\6\2\3\e\0\8\8\3\5 ]] 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:12.693 Validate MD5 checksum, iteration 2 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:12.693 14:13:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:12.693 [2024-07-15 14:13:36.757090] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:32:12.693 [2024-07-15 14:13:36.757239] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86906 ] 00:32:12.693 [2024-07-15 14:13:36.919756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.693 [2024-07-15 14:13:37.110063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.160  Copying: 431/1024 [MB] (431 MBps) Copying: 855/1024 [MB] (424 MBps) Copying: 1024/1024 [MB] (average 417 MBps) 00:32:17.160 00:32:17.160 14:13:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:17.160 14:13:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:19.686 14:13:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:19.686 14:13:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cdf9741576f2c674e423afa98fd87eab 00:32:19.686 14:13:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cdf9741576f2c674e423afa98fd87eab != \c\d\f\9\7\4\1\5\7\6\f\2\c\6\7\4\e\4\2\3\a\f\a\9\8\f\d\8\7\e\a\b ]] 00:32:19.686 14:13:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:19.686 14:13:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:19.686 14:13:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:19.686 14:13:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:19.686 14:13:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:19.686 14:13:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:19.686 14:13:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:19.686 14:13:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:19.686 14:13:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:19.686 14:13:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:19.686 14:13:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86798 ]] 00:32:19.686 14:13:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86798 00:32:19.686 14:13:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86798 ']' 00:32:19.686 14:13:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86798 00:32:19.686 14:13:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:32:19.686 14:13:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:19.687 14:13:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86798 00:32:19.687 14:13:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:19.687 killing process with pid 86798 00:32:19.687 14:13:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:19.687 14:13:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86798' 00:32:19.687 14:13:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86798 00:32:19.687 14:13:44 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86798 00:32:20.620 [2024-07-15 14:13:45.145842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:20.620 [2024-07-15 14:13:45.163850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.620 [2024-07-15 14:13:45.163932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:20.620 [2024-07-15 14:13:45.163953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:20.620 [2024-07-15 14:13:45.163967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.620 [2024-07-15 14:13:45.164000] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:20.620 [2024-07-15 14:13:45.167443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.620 [2024-07-15 14:13:45.167492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:20.620 [2024-07-15 14:13:45.167510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.416 ms 00:32:20.620 [2024-07-15 14:13:45.167523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.878 [2024-07-15 14:13:45.167797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.878 [2024-07-15 14:13:45.167836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:20.878 [2024-07-15 14:13:45.167861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:32:20.878 [2024-07-15 14:13:45.167874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.878 [2024-07-15 14:13:45.169139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.878 [2024-07-15 14:13:45.169185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:20.878 [2024-07-15 14:13:45.169204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.236 ms 00:32:20.878 [2024-07-15 14:13:45.169217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.878 [2024-07-15 14:13:45.170528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.878 [2024-07-15 14:13:45.170564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:20.878 [2024-07-15 14:13:45.170580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.263 ms 00:32:20.878 [2024-07-15 14:13:45.170601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.878 [2024-07-15 14:13:45.183590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.878 [2024-07-15 14:13:45.183696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:20.878 [2024-07-15 14:13:45.183720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.876 ms 00:32:20.878 [2024-07-15 14:13:45.183734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.878 [2024-07-15 14:13:45.190833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.878 [2024-07-15 14:13:45.190928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:20.878 [2024-07-15 14:13:45.190965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.989 ms 00:32:20.878 [2024-07-15 14:13:45.190978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.878 [2024-07-15 14:13:45.191109] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.878 [2024-07-15 14:13:45.191131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:20.878 [2024-07-15 14:13:45.191150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:32:20.878 [2024-07-15 14:13:45.191169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.878 [2024-07-15 14:13:45.204675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.878 [2024-07-15 14:13:45.204776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:32:20.878 [2024-07-15 14:13:45.204810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.459 ms 00:32:20.878 [2024-07-15 14:13:45.204829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.878 [2024-07-15 14:13:45.219052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.878 [2024-07-15 14:13:45.219134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:32:20.878 [2024-07-15 14:13:45.219157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.144 ms 00:32:20.878 [2024-07-15 14:13:45.219170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.878 [2024-07-15 14:13:45.231850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.878 [2024-07-15 14:13:45.231934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:20.878 [2024-07-15 14:13:45.231957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.610 ms 00:32:20.878 [2024-07-15 14:13:45.231970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.878 [2024-07-15 14:13:45.244794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.878 [2024-07-15 14:13:45.244880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:20.878 [2024-07-15 14:13:45.244901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.701 ms 00:32:20.878 [2024-07-15 14:13:45.244914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.878 [2024-07-15 14:13:45.244984] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:20.879 [2024-07-15 14:13:45.245011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:20.879 [2024-07-15 14:13:45.245026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:20.879 [2024-07-15 14:13:45.245038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:20.879 [2024-07-15 14:13:45.245051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:20.879 [2024-07-15 14:13:45.245294] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:20.879 [2024-07-15 14:13:45.245346] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3e7932a1-a53c-4831-92f5-d4a8dbaa5201 00:32:20.879 [2024-07-15 14:13:45.245367] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:20.879 [2024-07-15 14:13:45.245390] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:20.879 [2024-07-15 14:13:45.245408] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:20.879 [2024-07-15 14:13:45.245430] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:20.879 [2024-07-15 14:13:45.245444] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:20.879 [2024-07-15 14:13:45.245462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:20.879 [2024-07-15 14:13:45.245483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:20.879 [2024-07-15 14:13:45.245503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:20.879 [2024-07-15 14:13:45.245519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:20.879 [2024-07-15 14:13:45.245532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.879 [2024-07-15 14:13:45.245544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:20.879 [2024-07-15 14:13:45.245558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.550 ms 00:32:20.879 [2024-07-15 14:13:45.245571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.879 [2024-07-15 14:13:45.262547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.879 [2024-07-15 14:13:45.262618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:20.879 [2024-07-15 14:13:45.262651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.939 ms 00:32:20.879 [2024-07-15 14:13:45.262677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.879 [2024-07-15 14:13:45.263230] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:32:20.879 [2024-07-15 14:13:45.263271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:20.879 [2024-07-15 14:13:45.263288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.468 ms 00:32:20.879 [2024-07-15 14:13:45.263318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.879 [2024-07-15 14:13:45.315698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:20.879 [2024-07-15 14:13:45.315773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:20.879 [2024-07-15 14:13:45.315792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:20.879 [2024-07-15 14:13:45.315805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.879 [2024-07-15 14:13:45.315869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:20.879 [2024-07-15 14:13:45.315886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:20.879 [2024-07-15 14:13:45.315899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:20.879 [2024-07-15 14:13:45.315910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.879 [2024-07-15 14:13:45.316065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:20.879 [2024-07-15 14:13:45.316089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:20.879 [2024-07-15 14:13:45.316112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:20.879 [2024-07-15 14:13:45.316134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.879 [2024-07-15 14:13:45.316166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:20.879 [2024-07-15 14:13:45.316180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:20.879 [2024-07-15 14:13:45.316195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:20.879 [2024-07-15 14:13:45.316212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.879 [2024-07-15 14:13:45.416245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:20.879 [2024-07-15 14:13:45.416324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:20.879 [2024-07-15 14:13:45.416344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:20.879 [2024-07-15 14:13:45.416358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.137 [2024-07-15 14:13:45.503793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:21.137 [2024-07-15 14:13:45.503863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:21.137 [2024-07-15 14:13:45.503894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:21.137 [2024-07-15 14:13:45.503914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.137 [2024-07-15 14:13:45.504069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:21.137 [2024-07-15 14:13:45.504100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:21.137 [2024-07-15 14:13:45.504123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:21.137 [2024-07-15 14:13:45.504144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.137 [2024-07-15 
14:13:45.504236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:21.137 [2024-07-15 14:13:45.504281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:21.137 [2024-07-15 14:13:45.504324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:21.137 [2024-07-15 14:13:45.504350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.137 [2024-07-15 14:13:45.504537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:21.137 [2024-07-15 14:13:45.504592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:21.137 [2024-07-15 14:13:45.504617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:21.137 [2024-07-15 14:13:45.504639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.137 [2024-07-15 14:13:45.504722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:21.137 [2024-07-15 14:13:45.504751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:21.137 [2024-07-15 14:13:45.504774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:21.137 [2024-07-15 14:13:45.504795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.137 [2024-07-15 14:13:45.504876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:21.137 [2024-07-15 14:13:45.504911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:21.137 [2024-07-15 14:13:45.504934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:21.137 [2024-07-15 14:13:45.504954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.137 [2024-07-15 14:13:45.505041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:21.137 [2024-07-15 14:13:45.505083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:21.137 [2024-07-15 14:13:45.505107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:21.137 [2024-07-15 14:13:45.505128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.137 [2024-07-15 14:13:45.505380] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 341.442 ms, result 0 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:22.509 Remove shared memory files 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid86547 
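Editor's note (not part of the captured console output): the mngt/ftl_mngt.c trace_step notices above follow a fixed pattern per management step (Action/Rollback kind, name, duration, status), so a saved console log can be reduced to a per-step summary table. A minimal sketch, assuming the console output was saved to a file; the match strings come from the entries visible in this run, and the helper name is hypothetical:

#!/usr/bin/env bash
# ftl_trace_summary.sh -- condense SPDK FTL trace_step notices (hypothetical helper).
# usage: ftl_trace_summary.sh <console-log>
log=${1:?usage: $0 <console-log>}
awk '
  # The first trace_step line of each step carries its kind (Action or Rollback).
  /:trace_step:.*\[FTL\]/ && ($NF == "Action" || $NF == "Rollback") { kind = $NF }
  /:trace_step:.*name:/     { sub(/.*name: /, "");     name = $0 }
  /:trace_step:.*duration:/ { sub(/.*duration: /, ""); dur  = $0 }
  /:trace_step:.*status:/   { sub(/.*status: /, "")
                              printf "%-8s %-35s %12s  status=%s\n", kind, name, dur, $0 }
  # finish_msg carries the overall result, e.g. "FTL shutdown, duration = 341.442 ms, result 0" above.
  /:finish_msg:/ { print }
' "$log"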
00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:22.509 00:32:22.509 real 1m39.328s 00:32:22.509 user 2m23.640s 00:32:22.509 sys 0m24.344s 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:22.509 14:13:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:22.509 ************************************ 00:32:22.509 END TEST ftl_upgrade_shutdown 00:32:22.509 ************************************ 00:32:22.509 14:13:46 ftl -- common/autotest_common.sh@1142 -- # return 0 00:32:22.509 14:13:46 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:32:22.509 14:13:46 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:22.509 14:13:46 ftl -- ftl/ftl.sh@14 -- # killprocess 79468 00:32:22.509 14:13:46 ftl -- common/autotest_common.sh@948 -- # '[' -z 79468 ']' 00:32:22.509 14:13:46 ftl -- common/autotest_common.sh@952 -- # kill -0 79468 00:32:22.509 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (79468) - No such process 00:32:22.509 Process with pid 79468 is not found 00:32:22.509 14:13:46 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 79468 is not found' 00:32:22.509 14:13:46 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:22.509 14:13:46 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87039 00:32:22.509 14:13:46 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:22.509 14:13:46 ftl -- ftl/ftl.sh@20 -- # waitforlisten 87039 00:32:22.509 14:13:46 ftl -- common/autotest_common.sh@829 -- # '[' -z 87039 ']' 00:32:22.509 14:13:46 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.509 14:13:46 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:22.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.509 14:13:46 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.509 14:13:46 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:22.509 14:13:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:22.509 [2024-07-15 14:13:46.840516] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
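Editor's note (not part of the captured console output): the waitforlisten trace above (local rpc_addr=/var/tmp/spdk.sock, local max_retries=100) shows the launcher polling until the freshly started spdk_tgt answers on its RPC socket. A rough sketch of that pattern, assuming rpc_get_methods as the liveness probe; the real helper in autotest_common.sh may probe differently:

# Poll until the target with pid $1 listens on its UNIX-domain RPC socket.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        # Bail out early if the target died instead of coming up.
        kill -0 "$pid" 2>/dev/null || return 1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
            &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}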
00:32:22.509 [2024-07-15 14:13:46.840677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87039 ] 00:32:22.509 [2024-07-15 14:13:47.002120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.766 [2024-07-15 14:13:47.278318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.701 14:13:47 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:23.701 14:13:47 ftl -- common/autotest_common.sh@862 -- # return 0 00:32:23.701 14:13:47 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:23.959 nvme0n1 00:32:23.959 14:13:48 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:23.959 14:13:48 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:23.959 14:13:48 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:24.217 14:13:48 ftl -- ftl/common.sh@28 -- # stores=23b3a468-cf0d-4486-a90f-3c5465f137e1 00:32:24.217 14:13:48 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:24.217 14:13:48 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 23b3a468-cf0d-4486-a90f-3c5465f137e1 00:32:24.476 14:13:48 ftl -- ftl/ftl.sh@23 -- # killprocess 87039 00:32:24.476 14:13:48 ftl -- common/autotest_common.sh@948 -- # '[' -z 87039 ']' 00:32:24.476 14:13:48 ftl -- common/autotest_common.sh@952 -- # kill -0 87039 00:32:24.476 14:13:48 ftl -- common/autotest_common.sh@953 -- # uname 00:32:24.476 14:13:48 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:24.476 14:13:48 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87039 00:32:24.476 14:13:48 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:24.476 14:13:48 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:24.476 killing process with pid 87039 00:32:24.476 14:13:48 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87039' 00:32:24.476 14:13:48 ftl -- common/autotest_common.sh@967 -- # kill 87039 00:32:24.476 14:13:48 ftl -- common/autotest_common.sh@972 -- # wait 87039 00:32:27.006 14:13:51 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:27.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:27.006 Waiting for block devices as requested 00:32:27.006 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:27.006 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:27.006 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:27.263 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:32.525 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:32.525 14:13:56 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:32.525 Remove shared memory files 00:32:32.525 14:13:56 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:32.525 14:13:56 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:32.525 14:13:56 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:32.525 14:13:56 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:32.525 14:13:56 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:32.525 14:13:56 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:32.525 
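Editor's note (not part of the captured console output): the clear_lvols trace above (ftl/common.sh@28-30) reduces to "list every lvstore UUID, then delete each one". A direct reconstruction of those traced commands, with the rpc.py path bound to a variable for readability:

clear_lvols() {
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local stores lvs
    # common.sh@28: collect the UUIDs of all logical volume stores.
    stores=$("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    # common.sh@29-30: delete them one by one.
    for lvs in $stores; do
        "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    done
}

In this run it found a single store (23b3a468-cf0d-4486-a90f-3c5465f137e1) and removed it before killing spdk_tgt.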
************************************ 00:32:32.525 END TEST ftl 00:32:32.525 ************************************ 00:32:32.525 00:32:32.525 real 11m30.929s 00:32:32.525 user 14m35.172s 00:32:32.525 sys 1m32.150s 00:32:32.525 14:13:56 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:32.525 14:13:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:32.525 14:13:56 -- common/autotest_common.sh@1142 -- # return 0 00:32:32.525 14:13:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:32.525 14:13:56 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:32.525 14:13:56 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:32.525 14:13:56 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:32.525 14:13:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:32.525 14:13:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:32.525 14:13:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:32.525 14:13:56 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:32.525 14:13:56 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:32.525 14:13:56 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:32.525 14:13:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:32.525 14:13:56 -- common/autotest_common.sh@10 -- # set +x 00:32:32.525 14:13:56 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:32.525 14:13:56 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:32.525 14:13:56 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:32.525 14:13:56 -- common/autotest_common.sh@10 -- # set +x 00:32:33.461 INFO: APP EXITING 00:32:33.461 INFO: killing all VMs 00:32:33.461 INFO: killing vhost app 00:32:33.461 INFO: EXIT DONE 00:32:33.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:33.977 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:34.234 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:34.234 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:34.234 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:34.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:34.748 Cleaning 00:32:34.748 Removing: /var/run/dpdk/spdk0/config 00:32:34.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:34.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:34.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:34.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:34.748 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:34.748 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:34.748 Removing: /var/run/dpdk/spdk0 00:32:34.748 Removing: /var/run/dpdk/spdk_pid62226 00:32:34.748 Removing: /var/run/dpdk/spdk_pid62442 00:32:34.748 Removing: /var/run/dpdk/spdk_pid62658 00:32:34.748 Removing: /var/run/dpdk/spdk_pid62764 00:32:34.748 Removing: /var/run/dpdk/spdk_pid62820 00:32:34.748 Removing: /var/run/dpdk/spdk_pid62949 00:32:34.748 Removing: /var/run/dpdk/spdk_pid62967 00:32:34.748 Removing: /var/run/dpdk/spdk_pid63155 00:32:34.748 Removing: /var/run/dpdk/spdk_pid63247 00:32:34.748 Removing: /var/run/dpdk/spdk_pid63340 00:32:34.748 Removing: /var/run/dpdk/spdk_pid63455 00:32:34.748 Removing: /var/run/dpdk/spdk_pid63550 00:32:35.004 Removing: /var/run/dpdk/spdk_pid63595 00:32:35.004 Removing: /var/run/dpdk/spdk_pid63637 00:32:35.004 Removing: /var/run/dpdk/spdk_pid63705 00:32:35.004 Removing: /var/run/dpdk/spdk_pid63789 00:32:35.004 Removing: 
/var/run/dpdk/spdk_pid64257 00:32:35.004 Removing: /var/run/dpdk/spdk_pid64332 00:32:35.004 Removing: /var/run/dpdk/spdk_pid64406 00:32:35.004 Removing: /var/run/dpdk/spdk_pid64422 00:32:35.004 Removing: /var/run/dpdk/spdk_pid64565 00:32:35.004 Removing: /var/run/dpdk/spdk_pid64586 00:32:35.004 Removing: /var/run/dpdk/spdk_pid64729 00:32:35.004 Removing: /var/run/dpdk/spdk_pid64750 00:32:35.004 Removing: /var/run/dpdk/spdk_pid64820 00:32:35.004 Removing: /var/run/dpdk/spdk_pid64838 00:32:35.004 Removing: /var/run/dpdk/spdk_pid64902 00:32:35.004 Removing: /var/run/dpdk/spdk_pid64920 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65107 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65148 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65225 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65306 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65337 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65415 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65456 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65508 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65549 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65596 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65642 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65689 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65734 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65782 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65823 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65875 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65916 00:32:35.004 Removing: /var/run/dpdk/spdk_pid65963 00:32:35.004 Removing: /var/run/dpdk/spdk_pid66009 00:32:35.004 Removing: /var/run/dpdk/spdk_pid66050 00:32:35.004 Removing: /var/run/dpdk/spdk_pid66097 00:32:35.004 Removing: /var/run/dpdk/spdk_pid66143 00:32:35.004 Removing: /var/run/dpdk/spdk_pid66193 00:32:35.004 Removing: /var/run/dpdk/spdk_pid66248 00:32:35.005 Removing: /var/run/dpdk/spdk_pid66289 00:32:35.005 Removing: /var/run/dpdk/spdk_pid66337 00:32:35.005 Removing: /var/run/dpdk/spdk_pid66419 00:32:35.005 Removing: /var/run/dpdk/spdk_pid66535 00:32:35.005 Removing: /var/run/dpdk/spdk_pid66702 00:32:35.005 Removing: /var/run/dpdk/spdk_pid66798 00:32:35.005 Removing: /var/run/dpdk/spdk_pid66840 00:32:35.005 Removing: /var/run/dpdk/spdk_pid67313 00:32:35.005 Removing: /var/run/dpdk/spdk_pid67410 00:32:35.005 Removing: /var/run/dpdk/spdk_pid67526 00:32:35.005 Removing: /var/run/dpdk/spdk_pid67585 00:32:35.005 Removing: /var/run/dpdk/spdk_pid67616 00:32:35.005 Removing: /var/run/dpdk/spdk_pid67692 00:32:35.005 Removing: /var/run/dpdk/spdk_pid68319 00:32:35.005 Removing: /var/run/dpdk/spdk_pid68367 00:32:35.005 Removing: /var/run/dpdk/spdk_pid68882 00:32:35.005 Removing: /var/run/dpdk/spdk_pid68986 00:32:35.005 Removing: /var/run/dpdk/spdk_pid69103 00:32:35.005 Removing: /var/run/dpdk/spdk_pid69162 00:32:35.005 Removing: /var/run/dpdk/spdk_pid69193 00:32:35.005 Removing: /var/run/dpdk/spdk_pid69224 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71082 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71230 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71234 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71246 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71292 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71296 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71308 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71353 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71357 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71369 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71408 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71418 00:32:35.005 Removing: /var/run/dpdk/spdk_pid71430 
00:32:35.005 Removing: /var/run/dpdk/spdk_pid72770 00:32:35.005 Removing: /var/run/dpdk/spdk_pid72870 00:32:35.005 Removing: /var/run/dpdk/spdk_pid74271 00:32:35.005 Removing: /var/run/dpdk/spdk_pid75628 00:32:35.005 Removing: /var/run/dpdk/spdk_pid75754 00:32:35.005 Removing: /var/run/dpdk/spdk_pid75875 00:32:35.005 Removing: /var/run/dpdk/spdk_pid76004 00:32:35.005 Removing: /var/run/dpdk/spdk_pid76142 00:32:35.005 Removing: /var/run/dpdk/spdk_pid76224 00:32:35.005 Removing: /var/run/dpdk/spdk_pid76366 00:32:35.005 Removing: /var/run/dpdk/spdk_pid76731 00:32:35.005 Removing: /var/run/dpdk/spdk_pid76772 00:32:35.005 Removing: /var/run/dpdk/spdk_pid77255 00:32:35.005 Removing: /var/run/dpdk/spdk_pid77438 00:32:35.005 Removing: /var/run/dpdk/spdk_pid77541 00:32:35.005 Removing: /var/run/dpdk/spdk_pid77651 00:32:35.005 Removing: /var/run/dpdk/spdk_pid77710 00:32:35.005 Removing: /var/run/dpdk/spdk_pid77736 00:32:35.005 Removing: /var/run/dpdk/spdk_pid78033 00:32:35.005 Removing: /var/run/dpdk/spdk_pid78088 00:32:35.005 Removing: /var/run/dpdk/spdk_pid78166 00:32:35.005 Removing: /var/run/dpdk/spdk_pid78550 00:32:35.005 Removing: /var/run/dpdk/spdk_pid78691 00:32:35.005 Removing: /var/run/dpdk/spdk_pid79468 00:32:35.005 Removing: /var/run/dpdk/spdk_pid79611 00:32:35.005 Removing: /var/run/dpdk/spdk_pid79813 00:32:35.005 Removing: /var/run/dpdk/spdk_pid79910 00:32:35.005 Removing: /var/run/dpdk/spdk_pid80275 00:32:35.005 Removing: /var/run/dpdk/spdk_pid80551 00:32:35.005 Removing: /var/run/dpdk/spdk_pid80893 00:32:35.005 Removing: /var/run/dpdk/spdk_pid81097 00:32:35.005 Removing: /var/run/dpdk/spdk_pid81222 00:32:35.005 Removing: /var/run/dpdk/spdk_pid81284 00:32:35.005 Removing: /var/run/dpdk/spdk_pid81422 00:32:35.005 Removing: /var/run/dpdk/spdk_pid81454 00:32:35.296 Removing: /var/run/dpdk/spdk_pid81519 00:32:35.296 Removing: /var/run/dpdk/spdk_pid81715 00:32:35.296 Removing: /var/run/dpdk/spdk_pid81946 00:32:35.296 Removing: /var/run/dpdk/spdk_pid82322 00:32:35.296 Removing: /var/run/dpdk/spdk_pid82770 00:32:35.296 Removing: /var/run/dpdk/spdk_pid83168 00:32:35.296 Removing: /var/run/dpdk/spdk_pid83640 00:32:35.296 Removing: /var/run/dpdk/spdk_pid83784 00:32:35.296 Removing: /var/run/dpdk/spdk_pid83894 00:32:35.296 Removing: /var/run/dpdk/spdk_pid84527 00:32:35.296 Removing: /var/run/dpdk/spdk_pid84611 00:32:35.296 Removing: /var/run/dpdk/spdk_pid85049 00:32:35.296 Removing: /var/run/dpdk/spdk_pid85455 00:32:35.296 Removing: /var/run/dpdk/spdk_pid85949 00:32:35.296 Removing: /var/run/dpdk/spdk_pid86066 00:32:35.296 Removing: /var/run/dpdk/spdk_pid86119 00:32:35.296 Removing: /var/run/dpdk/spdk_pid86189 00:32:35.296 Removing: /var/run/dpdk/spdk_pid86256 00:32:35.296 Removing: /var/run/dpdk/spdk_pid86326 00:32:35.296 Removing: /var/run/dpdk/spdk_pid86547 00:32:35.296 Removing: /var/run/dpdk/spdk_pid86622 00:32:35.296 Removing: /var/run/dpdk/spdk_pid86703 00:32:35.296 Removing: /var/run/dpdk/spdk_pid86798 00:32:35.296 Removing: /var/run/dpdk/spdk_pid86833 00:32:35.296 Removing: /var/run/dpdk/spdk_pid86906 00:32:35.296 Removing: /var/run/dpdk/spdk_pid87039 00:32:35.296 Clean 00:32:35.296 14:13:59 -- common/autotest_common.sh@1451 -- # return 0 00:32:35.296 14:13:59 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:35.296 14:13:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:35.296 14:13:59 -- common/autotest_common.sh@10 -- # set +x 00:32:35.296 14:13:59 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:35.296 14:13:59 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:32:35.296 14:13:59 -- common/autotest_common.sh@10 -- # set +x 00:32:35.296 14:13:59 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:35.296 14:13:59 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:35.296 14:13:59 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:35.296 14:13:59 -- spdk/autotest.sh@391 -- # hash lcov 00:32:35.296 14:13:59 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:35.296 14:13:59 -- spdk/autotest.sh@393 -- # hostname 00:32:35.296 14:13:59 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:35.558 geninfo: WARNING: invalid characters removed from testname! 00:33:07.627 14:14:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:07.627 14:14:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:10.168 14:14:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:13.448 14:14:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:15.973 14:14:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:19.260 14:14:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:21.846 14:14:46 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:21.846 14:14:46 -- common/autobuild_common.sh@15 -- $ source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:21.846 14:14:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:21.846 14:14:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.846 14:14:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.846 14:14:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.846 14:14:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.846 14:14:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.846 14:14:46 -- paths/export.sh@5 -- $ export PATH 00:33:21.846 14:14:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.846 14:14:46 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:21.846 14:14:46 -- common/autobuild_common.sh@444 -- $ date +%s 00:33:21.846 14:14:46 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721052886.XXXXXX 00:33:21.846 14:14:46 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721052886.bTGCFY 00:33:21.846 14:14:46 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:33:21.846 14:14:46 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:33:21.846 14:14:46 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:21.846 14:14:46 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:21.846 14:14:46 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:21.846 14:14:46 -- common/autobuild_common.sh@460 -- $ get_config_params 00:33:21.846 14:14:46 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:21.846 14:14:46 -- common/autotest_common.sh@10 -- $ set +x 00:33:21.846 14:14:46 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-asan --enable-coverage --with-ublk --with-xnvme' 00:33:21.846 14:14:46 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:33:21.846 14:14:46 -- pm/common@17 -- $ local monitor 00:33:21.846 14:14:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:21.846 14:14:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:21.846 14:14:46 -- pm/common@25 -- $ sleep 1 00:33:21.846 14:14:46 -- pm/common@21 -- $ date +%s 00:33:21.846 14:14:46 -- pm/common@21 -- $ date +%s 00:33:21.847 14:14:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721052886 00:33:21.847 14:14:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721052886 00:33:21.847 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721052886_collect-vmstat.pm.log 00:33:21.847 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721052886_collect-cpu-load.pm.log 00:33:22.779 14:14:47 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:33:22.779 14:14:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:33:22.779 14:14:47 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:33:22.779 14:14:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:22.779 14:14:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:22.779 14:14:47 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:22.779 14:14:47 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:22.779 14:14:47 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:22.779 14:14:47 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:22.779 14:14:47 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:22.779 14:14:47 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:22.779 14:14:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:22.779 14:14:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:22.779 14:14:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:22.779 14:14:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:33:22.779 14:14:47 -- pm/common@44 -- $ pid=88740 00:33:22.779 14:14:47 -- pm/common@50 -- $ kill -TERM 88740 00:33:22.779 14:14:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:22.779 14:14:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:33:22.779 14:14:47 -- pm/common@44 -- $ pid=88741 00:33:22.779 14:14:47 -- pm/common@50 -- $ kill -TERM 88741 00:33:22.779 + [[ -n 5202 ]] 00:33:22.779 + sudo kill 5202 00:33:22.788 [Pipeline] } 00:33:22.809 [Pipeline] // timeout 00:33:22.814 [Pipeline] } 00:33:22.834 [Pipeline] // stage 00:33:22.840 [Pipeline] } 00:33:22.860 [Pipeline] // catchError 00:33:22.870 [Pipeline] stage 00:33:22.873 [Pipeline] { (Stop VM) 00:33:22.888 [Pipeline] sh 00:33:23.183 + vagrant halt 00:33:27.392 ==> default: Halting domain... 00:33:32.683 [Pipeline] sh 00:33:32.963 + vagrant destroy -f 00:33:37.148 ==> default: Removing domain... 
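Editor's note (not part of the captured console output): the pm/common teardown traced above is pid-file driven -- each resource monitor records its pid under the power output directory, and stop_monitor_resources signals whatever pid files it finds (here 88740 and 88741). A sketch under that assumption; the pid-file removal at the end is a guess, not visible in the trace:

stop_monitor_resources() {
    local power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    local monitor pid
    for monitor in collect-cpu-load collect-vmstat; do
        # pm/common@43-44: each monitor wrote <name>.pid when it started.
        [[ -e $power_dir/$monitor.pid ]] || continue
        pid=$(< "$power_dir/$monitor.pid")
        # pm/common@50: ask the monitor to flush its log and exit.
        kill -TERM "$pid" 2>/dev/null
        rm -f "$power_dir/$monitor.pid"    # assumption: clear the stale pid file
    done
}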
00:33:37.416 [Pipeline] sh 00:33:37.693 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:33:37.704 [Pipeline] } 00:33:37.728 [Pipeline] // stage 00:33:37.734 [Pipeline] } 00:33:37.751 [Pipeline] // dir 00:33:37.759 [Pipeline] } 00:33:37.778 [Pipeline] // wrap 00:33:37.785 [Pipeline] } 00:33:37.803 [Pipeline] // catchError 00:33:37.813 [Pipeline] stage 00:33:37.815 [Pipeline] { (Epilogue) 00:33:37.830 [Pipeline] sh 00:33:38.110 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:46.230 [Pipeline] catchError 00:33:46.232 [Pipeline] { 00:33:46.248 [Pipeline] sh 00:33:46.532 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:46.802 Artifacts sizes are good 00:33:46.812 [Pipeline] } 00:33:46.830 [Pipeline] // catchError 00:33:46.844 [Pipeline] archiveArtifacts 00:33:46.850 Archiving artifacts 00:33:47.004 [Pipeline] cleanWs 00:33:47.017 [WS-CLEANUP] Deleting project workspace... 00:33:47.017 [WS-CLEANUP] Deferred wipeout is used... 00:33:47.024 [WS-CLEANUP] done 00:33:47.026 [Pipeline] } 00:33:47.048 [Pipeline] // stage 00:33:47.056 [Pipeline] } 00:33:47.075 [Pipeline] // node 00:33:47.082 [Pipeline] End of Pipeline 00:33:47.126 Finished: SUCCESS
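Editor's note (not part of the captured console output): the coverage post-processing traced in autotest.sh@393-400 near the end of the run boils down to three phases: capture the test counters, merge them with the pre-test baseline, then strip vendored and system paths from the combined tracefile. A condensed replay of those steps; LCOV_OPTS is an assumption standing in for the repeated --rc/--no-external/-q flags seen in the trace:

OUT=/home/vagrant/spdk_repo/spdk/../output
SRC=/home/vagrant/spdk_repo/spdk
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# autotest.sh@393: capture the counters gathered while the tests ran.
lcov $LCOV_OPTS -c -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"
# autotest.sh@394: merge with the baseline taken before the tests.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# autotest.sh@395-399: drop vendored DPDK, system headers, and example apps.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done
# autotest.sh@400: remove the intermediate tracefiles.
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"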